About the cover illustration

Jefferys

The figure on the cover of Microservices Patterns is captioned “Habit of a Morisco Slave in 1568.” The illustration is taken from Thomas Jefferys’ A Collection of the Dresses of Different Nations, Ancient and Modern (four volumes), London, published between 1757 and 1772. The title page states that these are hand-colored copperplate engravings, heightened with gum arabic.

Thomas Jefferys (1719–1771) was called “Geographer to King George III.” He was an English cartographer who was the leading map supplier of his day. He engraved and printed maps for government and other official bodies and produced a wide range of commercial maps and atlases, especially of North America. His work as a map maker sparked an interest in local dress customs of the lands he surveyed and mapped, which are brilliantly displayed in this collection. Fascination with faraway lands and travel for pleasure were relatively new phenomena in the late 18th century, and collections such as this one were popular, introducing both the tourist and the armchair traveler to the inhabitants of other countries.

The diversity of the drawings in Jefferys’ volumes speaks vividly of the uniqueness and individuality of the world’s nations some 200 years ago. Dress codes have changed since then, and the diversity by region and country, so rich at the time, has faded away. It’s now often hard to tell the inhabitants of one continent from another. Perhaps, trying to view it optimistically, we’ve traded a cultural and visual diversity for a more varied personal life—or a more varied and interesting intellectual and technical life.

At a time when it’s difficult to tell one computer book from another, Manning celebrates the inventiveness and initiative of the computer business with book covers based on the rich diversity of regional life of two centuries ago, brought back to life by Jefferys’ pictures.

Microservices Patterns

Chris Richardson

Copyright

For online information and ordering of this and other Manning books, please visit www.manning.com. The publisher offers discounts on this book when ordered in quantity. For more information, please contact

       Special Sales Department
       Manning Publications Co.
       20 Baldwin Road
       PO Box 761
       Shelter Island, NY 11964
       Email: orders@manning.com

©2019 by Chris Richardson. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by means electronic, mechanical, photocopying, or otherwise, without prior written permission of the publisher.

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in the book, and Manning Publications was aware of a trademark claim, the designations have been printed in initial caps or all caps.

Recognizing the importance of preserving what has been written, it is Manning’s policy to have the books we publish printed on acid-free paper, and we exert our best efforts to that end. Recognizing also our responsibility to conserve the resources of our planet, Manning books are printed on paper that is at least 15 percent recycled and processed without the use of elemental chlorine.

Manning Publications Co.
20 Baldwin Road
PO Box 761
Shelter Island, NY 11964
Development editor: Marina Michaels
Technical development editor: Christian Mennerich
Review editor: Aleksandar Dragosavljević
Project editor: Lori Weidert
Copy editor: Corbin Collins
Proofreader: Alyson Brener
Technical proofreader: Andy Miles
Typesetter: Dennis Dalinnik
Cover designer: Marija Tudor

ISBN: 9781617294549

Printed in the United States of America

1 2 3 4 5 6 7 8 9 10 – DP – 23 22 21 20 19 18

Dedication

Where you see wrong or inequality or injustice, speak out, because this is your country. This is your democracy. Make it. Protect it. Pass it on.

Thurgood Marshall, Justice of the Supreme Court

Table of Contents

Copyright

Brief Table of Contents

Table of Contents

Preface

Acknowledgments

About this book

About the cover illustration

Chapter 1. Escaping monolithic hell

1.1. The slow march toward monolithic hell

1.1.1. The architecture of the FTGO application

1.1.2. The benefits of the monolithic architecture

1.1.3. Living in monolithic hell

1.2. Why this book is relevant to you

1.3. What you’ll learn in this book

1.4. Microservice architecture to the rescue

1.4.1. Scale cube and microservices

1.4.2. Microservices as a form of modularity

1.4.3. Each service has its own database

1.4.4. The FTGO microservice architecture

1.4.5. Comparing the microservice architecture and SOA

1.5. Benefits and drawbacks of the microservice architecture

1.5.1. Benefits of the microservice architecture

1.5.2. Drawbacks of the microservice architecture

1.6. The Microservice architecture pattern language

1.6.1. Microservice architecture is not a silver bullet

1.6.2. Patterns and pattern languages

1.6.3. Overview of the Microservice architecture pattern language

1.7. Beyond microservices: Process and organization

1.7.1. Software development and delivery organization

1.7.2. Software development and delivery process

1.7.3. The human side of adopting microservices

Summary

Chapter 2. Decomposition strategies

2.1. What is the microservice architecture exactly?

2.1.1. What is software architecture and why does it matter?

2.1.2. Overview of architectural styles

2.1.3. The microservice architecture is an architectural style

2.2. Defining an application’s microservice architecture

2.2.1. Identifying the system operations

2.2.2. Defining services by applying the Decompose by business capability pattern

2.2.3. Defining services by applying the Decompose by sub-domain pattern

2.2.4. Decomposition guidelines

2.2.5. Obstacles to decomposing an application into services

2.2.6. Defining service APIs

Summary

Chapter 3. Interprocess communication in a microservice architecture

3.1. Overview of interprocess communication in a microservice architecture

3.1.1. Interaction styles

3.1.2. Defining APIs in a microservice architecture

3.1.3. Evolving APIs

3.1.4. Message formats

3.2. Communicating using the synchronous Remote procedure invocation pattern

3.2.1. Using REST

3.2.2. Using gRPC

3.2.3. Handling partial failure using the Circuit breaker pattern

3.2.4. Using service discovery

3.3. Communicating using the Asynchronous messaging pattern

3.3.1. Overview of messaging

3.3.2. Implementing the interaction styles using messaging

3.3.3. Creating an API specification for a messaging-based service API

3.3.4. Using a message broker

3.3.5. Competing receivers and message ordering

3.3.6. Handling duplicate messages

3.3.7. Transactional messaging

3.3.8. Libraries and frameworks for messaging

3.4. Using asynchronous messaging to improve availability

3.4.1. Synchronous communication reduces availability

3.4.2. Eliminating synchronous interaction

Summary

Chapter 4. Managing transactions with sagas

4.1. Transaction management in a microservice architecture

4.1.1. The need for distributed transactions in a microservice architecture

4.1.2. The trouble with distributed transactions

4.1.3. Using the Saga pattern to maintain data consistency

4.2. Coordinating sagas

4.2.1. Choreography-based sagas

4.2.2. Orchestration-based sagas

4.3. Handling the lack of isolation

4.3.1. Overview of anomalies

4.3.2. Countermeasures for handling the lack of isolation

4.4. The design of the Order Service and the Create Order Saga

4.4.1. The OrderService class

4.4.2. The implementation of the Create Order Saga

4.4.3. The OrderCommandHandlers class

4.4.4. The OrderServiceConfiguration class

Summary

Chapter 5. Designing business logic in a microservice architecture

5.1. Business logic organization patterns

5.1.1. Designing business logic using the Transaction script pattern

5.1.2. Designing business logic using the Domain model pattern

5.1.3. About Domain-driven design

5.2. Designing a domain model using the DDD aggregate pattern

5.2.1. The problem with fuzzy boundaries

5.2.2. Aggregates have explicit boundaries

5.2.3. Aggregate rules

5.2.4. Aggregate granularity

5.2.5. Designing business logic with aggregates

5.3. Publishing domain events

5.3.1. Why publish change events?

5.3.2. What is a domain event?

5.3.3. Event enrichment

5.3.4. Identifying domain events

5.3.5. Generating and publishing domain events

5.3.6. Consuming domain events

5.4. Kitchen Service business logic

5.4.1. The Ticket aggregate

5.5. Order Service business logic

5.5.1. The Order Aggregate

5.5.2. The OrderService class

Summary

Chapter 6. Developing business logic with event sourcing

6.1. Developing business logic using event sourcing

6.1.1. The trouble with traditional persistence

6.1.2. Overview of event sourcing

6.1.3. Handling concurrent updates using optimistic locking

6.1.4. Event sourcing and publishing events

6.1.5. Using snapshots to improve performance

6.1.6. Idempotent message processing

6.1.7. Evolving domain events

6.1.8. Benefits of event sourcing

6.1.9. Drawbacks of event sourcing

6.2. Implementing an event store

6.2.1. How the Eventuate Local event store works

6.2.2. The Eventuate client framework for Java

6.3. Using sagas and event sourcing together

6.3.1. Implementing choreography-based sagas using event sourcing

6.3.2. Creating an orchestration-based saga

6.3.3. Implementing an event sourcing-based saga participant

6.3.4. Implementing saga orchestrators using event sourcing

Summary

Chapter 7. Implementing queries in a microservice architecture

7.1. Querying using the API composition pattern

7.1.1. The findOrder() query operation

7.1.2. Overview of the API composition pattern

7.1.3. Implementing the findOrder() query operation using the API composition pattern

7.1.4. API composition design issues

7.1.5. The benefits and drawbacks of the API composition pattern

7.2. Using the CQRS pattern

7.2.1. Motivations for using CQRS

7.2.2. Overview of CQRS

7.2.3. The benefits of CQRS

7.2.4. The drawbacks of CQRS

7.3. Designing CQRS views

7.3.1. Choosing a view datastore

7.3.2. Data access module design

7.3.3. Adding and updating CQRS views

7.4. Implementing a CQRS view with AWS DynamoDB

7.4.1. The OrderHistoryEventHandlers module

7.4.2. Data modeling and query design with DynamoDB

7.4.3. The OrderHistoryDaoDynamoDb class

Summary

Chapter 8. External API patterns

8.1. External API design issues

8.1.1. API design issues for the FTGO mobile client

8.1.2. API design issues for other kinds of clients

8.2. The API gateway pattern

8.2.1. Overview of the API gateway pattern

8.2.2. Benefits and drawbacks of an API gateway

8.2.3. Netflix as an example of an API gateway

8.2.4. API gateway design issues

8.3. Implementing an API gateway

8.3.1. Using an off-the-shelf API gateway product/service

8.3.2. Developing your own API gateway

8.3.3. Implementing an API gateway using GraphQL

Summary

Chapter 9. Testing microservices: Part 1

9.1. Testing strategies for microservice architectures

9.1.1. Overview of testing

9.1.2. The challenge of testing microservices

9.1.3. The deployment pipeline

9.2. Writing unit tests for a service

9.2.1. Developing unit tests for entities

9.2.2. Writing unit tests for value objects

9.2.3. Developing unit tests for sagas

9.2.4. Writing unit tests for domain services

9.2.5. Developing unit tests for controllers

9.2.6. Writing unit tests for event and message handlers

Summary

Chapter 10. Testing microservices: Part 2

10.1. Writing integration tests

10.1.1. Persistence integration tests

10.1.2. Integration testing REST-based request/response style interactions

10.1.3. Integration testing publish/subscribe-style interactions

10.1.4. Integration contract tests for asynchronous request/response interactions

10.2. Developing component tests

10.2.1. Defining acceptance tests

10.2.2. Writing acceptance tests using Gherkin

10.2.3. Designing component tests

10.2.4. Writing component tests for the FTGO Order Service

10.3. Writing end-to-end tests

10.3.1. Designing end-to-end tests

10.3.2. Writing end-to-end tests

10.3.3. Running end-to-end tests

Summary

Chapter 11. Developing production-ready services

11.1. Developing secure services

11.1.1. Overview of security in a traditional monolithic application

11.1.2. Implementing security in a microservice architecture

11.2. Designing configurable services

11.2.1. Using push-based externalized configuration

11.2.2. Using pull-based externalized configuration

11.3. Designing observable services

11.3.1. Using the Health check API pattern

11.3.2. Applying the Log aggregation pattern

11.3.3. Using the Distributed tracing pattern

11.3.4. Applying the Application metrics pattern

11.3.5. Using the Exception tracking pattern

11.3.6. Applying the Audit logging pattern

11.4. Developing services using the Microservice chassis pattern

11.4.1. Using a microservice chassis

11.4.2. From microservice chassis to service mesh

Summary

Chapter 12. Deploying microservices

12.1. Deploying services using the Language-specific packaging format pattern

12.1.1. Benefits of the Service as a language-specific package pattern

12.1.2. Drawbacks of the Service as a language-specific package pattern

12.2. Deploying services using the Service as a virtual machine pattern

12.2.1. The benefits of deploying services as VMs

12.2.2. The drawbacks of deploying services as VMs

12.3. Deploying services using the Service as a container pattern

12.3.1. Deploying services using Docker

12.3.2. Benefits of deploying services as containers

12.3.3. Drawbacks of deploying services as containers

12.4. Deploying the FTGO application with Kubernetes

12.4.1. Overview of Kubernetes

12.4.2. Deploying the Restaurant service on Kubernetes

12.4.3. Deploying the API gateway

12.4.4. Zero-downtime deployments

12.4.5. Using a service mesh to separate deployment from release

12.5. Deploying services using the Serverless deployment pattern

12.5.1. Overview of serverless deployment with AWS Lambda

12.5.2. Developing a lambda function

12.5.3. Invoking lambda functions

12.5.4. Benefits of using lambda functions

12.5.5. Drawbacks of using lambda functions

12.6. Deploying a RESTful service using AWS Lambda and AWS Gateway

12.6.1. The design of the AWS Lambda version of Restaurant Service

12.6.2. Packaging the service as ZIP file

12.6.3. Deploying lambda functions using the Serverless framework

Summary

Chapter 13. Refactoring to microservices

13.1. Overview of refactoring to microservices

13.1.1. Why refactor a monolith?

13.1.2. Strangling the monolith

13.2. Strategies for refactoring a monolith to microservices

13.2.1. Implement new features as services

13.2.2. Separate presentation tier from the backend

13.2.3. Extract business capabilities into services

13.3. Designing how the service and the monolith collaborate

13.3.1. Designing the integration glue

13.3.2. Maintaining data consistency across a service and a monolith

13.3.3. Handling authentication and authorization

13.4. Implementing a new feature as a service: handling misdelivered orders

13.4.1. The design of Delayed Delivery Service

13.4.2. Designing the integration glue for Delayed Delivery Service

13.5. Breaking apart the monolith: extracting delivery management

13.5.1. Overview of existing delivery management functionality

13.5.2. Overview of Delivery Service

13.5.3. Designing the Delivery Service domain model

13.5.4. The design of the Delivery Service integration glue

13.5.5. Changing the FTGO monolith to interact with Delivery Service

Summary

List of Patterns

Application architecture patterns

Decomposition patterns

Messaging style patterns

Reliable communications patterns

Service discovery patterns

Transactional messaging patterns

Data consistency patterns

Business logic design patterns

Querying patterns

External API patterns

Testing patterns

Security patterns

Cross-cutting concerns patterns

Observability patterns

Deployment patterns

Refactoring to microservices patterns

Index

List of Figures

List of Tables

List of Listings

Preface

One of my favorite quotes is

The future is already here—it’s just not very evenly distributed.

William Gibson, science fiction author

The essence of that quote is that new ideas and technology take a while to diffuse through a community and become widely adopted. A good example of the slow diffusion of ideas is the story of how I discovered microservices. It began in 2006, when, after being inspired by a talk given by an AWS evangelist, I started down a path that ultimately led to my creating the original Cloud Foundry. (The only thing in common with today’s Cloud Foundry is the name.) Cloud Foundry was a Platform-as-a-Service (PaaS) for automating the deployment of Java applications on EC2. Like every other enterprise Java application that I’d built, my Cloud Foundry had a monolith architecture consisting of a single Java Web Application Archive (WAR) file.

Bundling a diverse and complex set of functions such as provisioning, configuration, monitoring, and management into a monolith created both development and operations challenges. You couldn’t, for example, change the UI without testing and redeploying the entire application. And because the monitoring and management component relied on a Complex Event Processing (CEP) engine that maintained in-memory state, we couldn’t run multiple instances of the application! That’s embarrassing to admit, but all I can say is that I am a software developer, and, “let he who is without sin cast the first stone.”

Clearly, the application had quickly outgrown its monolith architecture, but what was the alternative? The answer had been out in the software community for some time at companies such as eBay and Amazon. Amazon had, for example, started to migrate away from the monolith around 2002 (https://plus.google.com/110981030061712822816/posts/AaygmbzVeRq). The new architecture replaced the monolith with a collection of loosely coupled services. Services are owned by what Amazon calls two-pizza teams—teams small enough to be fed by two pizzas.

Amazon had adopted this architecture to accelerate the rate of software development so that the company could innovate faster and compete more effectively. The results are impressive: Amazon reportedly deploys changes into production every 11.6 seconds!

In early 2010, after I’d moved on to other projects, the future of software architecture finally caught up with me. That’s when I read the book The Art of Scalability: Scalable Web Architecture, Processes, and Organizations for the Modern Enterprise (Addison-Wesley Professional, 2009) by Michael T. Fisher and Martin L. Abbott. A key idea in that book is the scale cube, which, as described in chapter 2, is a three-dimensional model for scaling an application. The Y-axis scaling defined by the scale cube functionally decomposes an application into services. In hindsight, this was quite obvious, but for me at the time, it was an a-ha moment! I could have solved the challenges I was facing two years earlier by architecting Cloud Foundry as a set of services!

In April 2012, I gave my first talk on this architectural approach, called “Decomposing Applications for Deployability and Scalability” (www.slideshare.net/chris.e.richardson/decomposing-applications-for-scalability-and-deployability-april-2012). At the time, there wasn’t a generally accepted term for this kind of architecture. I sometimes called it modular, polyglot architecture, because the services could be written in different languages.

But in another example of how the future is unevenly distributed, the term microservice was used at a software architecture workshop in 2011 to describe this kind of architecture (https://en.wikipedia.org/wiki/Microservices). I first encountered the term when I heard Fred George give a talk at Oredev 2013, and I liked it!

In January 2014, I created the https://microservices.io website to document architecture and design patterns that I had encountered. Then in March 2014, James Lewis and Martin Fowler published a blog post about microservices (https://martinfowler.com/articles/microservices.html). By popularizing the term microservices, the blog post caused the software community to consolidate around the concept.

The idea of small, loosely coupled teams, rapidly and reliably developing and delivering microservices is slowly diffusing through the software community. But it’s likely that this vision of the future is quite different from your daily reality. Today, business-critical enterprise applications are typically large monoliths developed by large teams. Software releases occur infrequently and are often painful for everyone involved. IT often struggles to keep up with the needs of the business. You’re wondering how on earth you can adopt the microservice architecture.

The goal of this book is to answer that question. It will give you a good understanding of the microservice architecture, its benefits and drawbacks, and when to use it. The book describes how to solve the numerous design challenges you’ll face, including how to manage distributed data. It also covers how to refactor a monolithic application to a microservice architecture. But this book is not a microservices manifesto. Instead, it’s organized around a collection of patterns. A pattern is a reusable solution to a problem that occurs in a particular context. The beauty of a pattern is that besides describing the benefits of the solution, it also describes the drawbacks and the issues you must address in order to successfully implement a solution. In my experience, this kind of objectivity when thinking about solutions leads to much better decision making. I hope you’ll enjoy reading this book and that it teaches you how to successfully develop microservices.

Acknowledgments

Although writing is a solitary activity, it takes a large number of people to turn rough drafts into a finished book.

First, I want to thank Erin Twohey and Michael Stevens from Manning for their persistent encouragement to write another book. I would also like to thank my development editors, Cynthia Kane and Marina Michaels. Cynthia Kane got me started and worked with me on the first few chapters. Marina Michaels took over from Cynthia and worked with me to the end. I’ll be forever grateful for Marina’s meticulous and constructive critiques of my chapters. And I want to thank the rest of the Manning team who’s been involved in getting this book published.

I’d like to thank my technical development editor, Christian Mennerich, my technical proofreader, Andy Miles, and all my external reviewers: Andy Kirsch, Antonio Pessolano, Areg Melik-Adamyan, Cage Slagel, Carlos Curotto, Dror Helper, Eros Pedrini, Hugo Cruz, Irina Romanenko, Jesse Rosalia, Joe Justesen, John Guthrie, Keerthi Shetty, Michele Mauro, Paul Grebenc, Pethuru Raj, Potito Coluccelli, Shobha Iyer, Simeon Leyzerzon, Srihari Sridharan, Tim Moore, Tony Sweets, Trent Whiteley, Wes Shaddix, William E. Wheeler, and Zoltan Hamori.

I also want to thank everyone who purchased the MEAP and provided feedback in the forum or to me directly.

I want to thank the organizers and attendees of all of the conferences and meetups at which I’ve spoken for the chance to present and revise my ideas. And I want to thank my consulting and training clients around the world for giving me the opportunity to help them put my ideas into practice.

I want to thank my colleagues Andrew, Valentin, Artem, and Stanislav at Eventuate, Inc., for their contributions to the Eventuate product and open source projects.

Finally, I’d like to thank my wife, Laura, and my children, Ellie, Thomas, and Janet for their support and understanding over the last 18 months. While I’ve been glued to my laptop, I’ve missed out on going to Ellie’s soccer games, watching Thomas learning to fly on his flight simulator, and trying new restaurants with Janet.

Thank you all!

About this book

The goal of this book is to teach you how to successfully develop applications using the microservice architecture.

Not only does it discuss the benefits of the microservice architecture, it also describes the drawbacks. You’ll learn when you should consider using the monolithic architecture and when it makes sense to use microservices.

Who should read this book

The focus of this book is on architecture and development. It’s meant for anyone responsible for developing and delivering software, such as developers, architects, CTOs, or VPs of engineering.

The book focuses on explaining the microservice architecture patterns and other concepts. My goal is for you to find this material accessible, regardless of the technology stack you use. You only need to be familiar with the basics of enterprise application architecture and design. In particular, you need to understand concepts like three-tier architecture, web application design, relational databases, interprocess communication using messaging and REST, and the basics of application security. The code examples, though, use Java and the Spring framework. In order to get the most out of them, you should be familiar with the Spring framework.

Roadmap

This book consists of 13 chapters:

  • Chapter 1 describes the symptoms of monolithic hell, which occurs when a monolithic application outgrows its architecture, and advises on how to escape by adopting the microservice architecture. It also provides an overview of the microservice architecture pattern language, which is the organizing theme for most of the book.
  • Chapter 2 explains why software architecture is important and describes the patterns you can use to decompose an application into a collection of services. It also explains how to overcome the various obstacles you typically encounter along the way.
  • Chapter 3 describes the different patterns for robust, interprocess communication in a microservice architecture. It explains why asynchronous, message-based communication is often the best choice.
  • Chapter 4 explains how to maintain data consistency across services by using the Saga pattern. A saga is a sequence of local transactions coordinated using asynchronous messaging.
  • Chapter 5 describes how to design the business logic for a service using the domain-driven design (DDD) Aggregate and Domain event patterns.
  • Chapter 6 builds on chapter 5 and explains how to develop business logic using the Event sourcing pattern, an event-centric way to structure the business logic and persist domain objects.
  • Chapter 7 describes how to implement queries that retrieve data scattered across multiple services by using either the API composition pattern or the Command query responsibility segregation (CQRS) pattern.
  • Chapter 8 covers the external API patterns for handling requests from a diverse collection of external clients, such as mobile applications, browser-based JavaScript applications, and third-party applications.
  • Chapter 9 is the first of two chapters on automated testing techniques for microservices. It introduces important testing concepts such as the test pyramid, which describes the relative proportions of each type of test in your test suite. It also shows how to write unit tests, which form the base of the testing pyramid.
  • Chapter 10 builds on chapter 9 and describes how to write other types of tests in the test pyramid, including integration tests, consumer contract tests, and component tests.
  • Chapter 11 covers various aspects of developing production-ready services, including security, the Externalized configuration pattern, and the service observability patterns. The service observability patterns include Log aggregation, Application metrics, and Distributed tracing.
  • Chapter 12 describes the various deployment patterns that you can use to deploy services, including virtual machines, containers, and serverless. It also discusses the benefits of using a service mesh, a layer of networking software that mediates communication in a microservice architecture.
  • Chapter 13 explains how to incrementally refactor a monolithic architecture to a microservice architecture by applying the Strangler application pattern: implementing new features as services and extracting modules out of the monolith and converting them to services.

As you progress through these chapters, you’ll learn about different aspects of the microservice architecture.

About the code

This book contains many examples of source code both in numbered listings and inline with normal text. In both cases, source code is formatted in a fixed-width font like this to separate it from ordinary text. Sometimes code is also in bold to highlight code that has changed from previous steps in the chapter, such as when a new feature adds to an existing line of code. In many cases, the original source code has been reformatted; the publisher has added line breaks and reworked indentation to accommodate the available page space in the book. In rare cases, even this was not enough, and listings include line-continuation markers. Additionally, comments in the source code have often been removed from the listings when the code is described in the text. Code annotations accompany many of the listings, highlighting important concepts.

Every chapter, except chapters 1, 2, and 13, contains code from the companion example application. You can find the code for this application in a GitHub repository: https://github.com/microservices-patterns/ftgo-application.

Book forum

The purchase of Microservices Patterns includes free access to a private web forum run by Manning Publications where you can make comments about the book, ask technical questions, share your solutions to exercises, and receive help from the author and from other users. To access the forum and subscribe to it, point your web browser to https://forums.manning.com/forums/microservices-patterns. You can also learn more about Manning’s forums and the rules of conduct at https://forums.manning.com/forums/about.

Manning’s commitment to our readers is to provide a venue where a meaningful dialogue between individual readers and between readers and the author can take place. It’s not a commitment to any specific amount of participation on the part of the author, whose contribution to the forum remains voluntary (and unpaid). We suggest you try asking the author some challenging questions lest his interest stray! The forum and the archives of previous discussions will be accessible from the publisher’s website as long as the book is in print.

Other online resources

Another great resource for learning the microservice architecture is my website http://microservices.io.

Not only does it contain the complete pattern language, it also has links to other resources such as articles, presentations, and example code.

About the author

Chris Richardson is a developer and architect. He is a Java Champion, a JavaOne rock star, and the author of POJOs in Action (Manning, 2006), which describes how to build enterprise Java applications with frameworks such as Spring and Hibernate.

Chris was also the founder of the original CloudFoundry.com, an early Java PaaS for Amazon EC2.

Today, he is a recognized thought leader in microservices and speaks regularly at international conferences. Chris is the creator of Microservices.io, a pattern language for microservices. He provides microservices consulting and training to organizations around the world that are adopting the microservice architecture. Chris is working on his third startup: Eventuate.io, an application platform for developing transactional microservices.

Chapter 1. Escaping monolithic hell

This chapter covers

  • The symptoms of monolithic hell and how to escape it by adopting the microservice architecture
  • The essential characteristics of the microservice architecture and its benefits and drawbacks
  • How microservices enable the DevOps style of development of large, complex applications
  • The microservice architecture pattern language and why you should use it

It was only Monday lunchtime, but Mary, the CTO of Food to Go, Inc. (FTGO), was already feeling frustrated. Her day had started off really well. She had spent the previous week with other software architects and developers at an excellent conference learning about the latest software development techniques, including continuous deployment and the microservice architecture. Mary had also met up with her former computer science classmates from North Carolina A&T State and shared technology leadership war stories. The conference had left her feeling empowered and eager to improve how FTGO develops software.

Unfortunately, that feeling had quickly evaporated. She had just spent the first morning back in the office in yet another painful meeting with senior engineering and business people. They had spent two hours discussing why the development team was going to miss another critical release date. Sadly, this kind of meeting had become increasingly common over the past few years. Despite adopting agile, the pace of development was slowing down, making it next to impossible to meet the business’s goals. And, to make matters worse, there didn’t seem to be a simple solution.

The conference had made Mary realize that FTGO was suffering from a case of monolithic hell and that the cure was to adopt the microservice architecture. But the microservice architecture and the associated state-of-the-art software development practices described at the conference felt like an elusive dream. It was unclear to Mary how she could fight today’s fires while simultaneously improving the way software was developed at FTGO.

Fortunately, as you will learn in this book, there is a way. But first, let’s look at the problems that FTGO is facing and how they got there.

1.1. The slow march toward monolithic hell

Since its launch in late 2005, FTGO had grown by leaps and bounds. Today, it’s one of the leading online food delivery companies in the United States. The business even plans to expand overseas, although those plans are in jeopardy because of delays in implementing the necessary features.

At its core, the FTGO application is quite simple. Consumers use the FTGO website or mobile application to place food orders at local restaurants. FTGO coordinates a network of couriers who deliver the orders. It’s also responsible for paying couriers and restaurants. Restaurants use the FTGO website to edit their menus and manage orders. The application uses various web services, including Stripe for payments, Twilio for messaging, and Amazon Simple Email Service (SES) for email.

Like many other aging enterprise applications, the FTGO application is a monolith, consisting of a single Java Web Application Archive (WAR) file. Over the years, it has become a large, complex application. Despite the best efforts of the FTGO development team, it’s become an example of the Big Ball of Mud pattern (www.laputan.org/mud/). To quote Foote and Yoder, the authors of that pattern, it’s a “haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle.” The pace of software delivery has slowed. To make matters worse, the FTGO application has been written using some increasingly obsolete frameworks. The FTGO application is exhibiting all the symptoms of monolithic hell.

The next section describes the architecture of the FTGO application. Then it talks about why the monolithic architecture worked well initially. We’ll get into how the FTGO application has outgrown its architecture and how that has resulted in monolithic hell.

1.1.1. The architecture of the FTGO application

FTGO is a typical enterprise Java application. Figure 1.1 shows its architecture. The FTGO application has a hexagonal architecture, which is an architectural style described in more detail in chapter 2. In a hexagonal architecture, the core of the application consists of the business logic. Surrounding the business logic are various adapters that implement UIs and integrate with external systems.

Figure 1.1. The FTGO application has a hexagonal architecture. It consists of the business logic surrounded by adapters that implement the UI and interface with external systems, such as the mobile applications and the cloud services for payments, messaging, and email.

The business logic consists of modules, each of which is a collection of domain objects. Examples of the modules include Order Management, Delivery Management, Billing, and Payments. There are several adapters that interface with the external systems. Some are inbound adapters, which handle requests by invoking the business logic, including the REST API and Web UI adapters. Others are outbound adapters, which enable the business logic to access the MySQL database and invoke cloud services such as Twilio and Stripe.
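The port-and-adapter split described above can be sketched in a few lines of Java. This is a minimal, illustrative sketch only: the names (`OrderManagement`, `OrderRepository`, `RestApiAdapter`, and so on) are invented for the example and are not taken from the actual FTGO code base, and an in-memory map stands in for the MySQL adapter.

```java
import java.util.HashMap;
import java.util.Map;

// Inbound port: the interface through which adapters invoke the business logic.
interface OrderManagement {
    long placeOrder(String restaurant, String item);
}

// Outbound port: the interface through which the business logic reaches storage.
interface OrderRepository {
    long save(String restaurant, String item);
}

// Core business logic: depends only on ports, never on adapter details
// such as HTTP or JDBC.
class OrderService implements OrderManagement {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    @Override
    public long placeOrder(String restaurant, String item) {
        return repository.save(restaurant, item);
    }
}

// Outbound adapter: an in-memory stand-in for the MySQL adapter.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<Long, String> orders = new HashMap<>();
    private long nextId = 1;

    @Override
    public long save(String restaurant, String item) {
        long id = nextId++;
        orders.put(id, restaurant + ": " + item);
        return id;
    }
}

// Inbound adapter: a stand-in for the REST API adapter, which translates a
// request into a call on the inbound port.
class RestApiAdapter {
    private final OrderManagement orderManagement;

    RestApiAdapter(OrderManagement orderManagement) {
        this.orderManagement = orderManagement;
    }

    String handlePost(String restaurant, String item) {
        long orderId = orderManagement.placeOrder(restaurant, item);
        return "201 Created, orderId=" + orderId;
    }
}

public class HexagonalSketch {
    public static void main(String[] args) {
        RestApiAdapter api =
            new RestApiAdapter(new OrderService(new InMemoryOrderRepository()));
        System.out.println(api.handlePost("Ajanta", "Chicken Vindaloo"));
        // prints: 201 Created, orderId=1
    }
}
```

The point of the style is visible in the wiring: `InMemoryOrderRepository` could be swapped for a JDBC-backed implementation, or `RestApiAdapter` for a message-listener adapter, without touching `OrderService`.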

Despite having a logically modular architecture, the FTGO application is packaged as a single WAR file. The application is an example of the widely used monolithic style of software architecture, which structures a system as a single executable or deployable component. If the FTGO application were written in the Go language (GoLang), it would be a single executable. A Ruby or NodeJS version of the application would be a single directory hierarchy of source code. The monolithic architecture isn’t inherently bad. The FTGO developers made a good decision when they picked monolithic architecture for their application.

1.1.2. The benefits of the monolithic architecture

In the early days of FTGO, when the application was relatively small, the application’s monolithic architecture had lots of benefits:

  • Simple to develop: IDEs and other developer tools are focused on building a single application.
  • Easy to make radical changes to the application: You can change the code and the database schema, build, and deploy.
  • Straightforward to test: The developers wrote end-to-end tests that launched the application, invoked the REST API, and tested the UI with Selenium.
  • Straightforward to deploy: All a developer had to do was copy the WAR file to a server that had Tomcat installed.
  • Easy to scale: FTGO ran multiple instances of the application behind a load balancer.

Over time, though, development, testing, deployment, and scaling became much more difficult. Let’s look at why.

1.1.3. Living in monolithic hell

Unfortunately, as the FTGO developers have discovered, the monolithic architecture has a huge limitation. Successful applications like the FTGO application have a habit of outgrowing the monolithic architecture. Each sprint, the FTGO development team implemented a few more stories, which made the code base larger. Moreover, as the company became more successful, the size of the development team steadily grew. Not only did this increase the growth rate of the code base, it also increased the management overhead.

As figure 1.2 shows, the once small, simple FTGO application has grown over the years into a monstrous monolith. Similarly, the small development team has now become multiple Scrum teams, each of which works on a particular functional area. As a result of outgrowing its architecture, FTGO is in monolithic hell. Development is slow and painful. Agile development and deployment is impossible. Let’s look at why this has happened.

Figure 1.2. A case of monolithic hell. The large FTGO development team commits its changes to a single source code repository. The path from code commit to production is long and arduous and involves manual testing. The FTGO application is large, complex, unreliable, and difficult to maintain.

Complexity intimidates developers

A major problem with the FTGO application is that it’s too complex. It’s too large for any developer to fully understand. As a result, fixing bugs and correctly implementing new features have become difficult and time consuming. Deadlines are missed.

To make matters worse, this overwhelming complexity tends to be a downward spiral. If the code base is difficult to understand, a developer won’t make changes correctly. Each change makes the code base incrementally more complex and harder to understand. The clean, modular architecture shown earlier in figure 1.1 doesn’t reflect reality. FTGO is gradually becoming a monstrous, incomprehensible, big ball of mud.

Mary remembers recently attending a conference where she met a developer who was writing a tool to analyze the dependencies between the thousands of JARs in their multimillion lines-of-code (LOC) application. At the time, that tool seemed like something FTGO could use. Now she’s not so sure. Mary suspects a better approach is to migrate to an architecture that is better suited to a complex application: microservices.

Development is slow

As well as having to fight overwhelming complexity, FTGO developers find day-to-day development tasks slow. The large application overloads and slows down a developer’s IDE. Building the FTGO application takes a long time. Moreover, because it’s so large, the application takes a long time to start up. As a result, the edit-build-run-test loop takes a long time, which badly impacts productivity.

The path from commit to deployment is long and arduous

Another problem with the FTGO application is that deploying changes into production is a long and painful process. The team typically deploys updates to production once a month, usually late on a Friday or Saturday night. Mary keeps reading that the state-of-the-art for Software-as-a-Service (SaaS) applications is continuous deployment: deploying changes to production many times a day during business hours. Apparently, as of 2011, Amazon.com deployed a change into production every 11.6 seconds without ever impacting the user! For the FTGO developers, updating production more than once a month seems like a distant dream. And adopting continuous deployment seems next to impossible.

FTGO has partially adopted agile. The engineering team is divided into squads and uses two-week sprints. Unfortunately, the journey from code complete to running in production is long and arduous. One problem with so many developers committing to the same code base is that the build is frequently in an unreleasable state. When the FTGO developers tried to solve this problem by using feature branches, their attempt resulted in lengthy, painful merges. Consequently, once a team completes its sprint, a long period of testing and code stabilization follows.

Another reason it takes so long to get changes into production is that testing takes a long time. Because the code base is so complex and the impact of a change isn’t well understood, developers and the Continuous Integration (CI) server must run the entire test suite. Some parts of the system even require manual testing. It also takes a while to diagnose and fix the cause of a test failure. As a result, it takes a couple of days to complete a testing cycle.

Scaling is difficult

The FTGO team also has problems scaling its application. That’s because different application modules have conflicting resource requirements. The restaurant data, for example, is stored in a large, in-memory database, which is ideally deployed on servers with lots of memory. In contrast, the image processing module is CPU intensive and best deployed on servers with lots of CPU. Because these modules are part of the same application, FTGO must compromise on the server configuration.

Delivering a reliable monolith is a challenge

Another problem with the FTGO application is the lack of reliability. As a result, there are frequent production outages. One reason it’s unreliable is that testing the application thoroughly is difficult, due to its large size. This lack of testability means bugs make their way into production. To make matters worse, the application lacks fault isolation, because all modules are running within the same process. Every so often, a bug in one module—for example, a memory leak—crashes all instances of the application, one by one. The FTGO developers don’t enjoy being paged in the middle of the night because of a production outage. The business people like the loss of revenue and trust even less.

Locked into an increasingly obsolete technology stack

The final aspect of monolithic hell experienced by the FTGO team is that the architecture forces them to use a technology stack that’s becoming increasingly obsolete. The monolithic architecture makes it difficult to adopt new frameworks and languages. It would be extremely expensive and risky to rewrite the entire monolithic application so that it would use a new and presumably better technology. Consequently, developers are stuck with the technology choices they made at the start of the project. Quite often, they must maintain an application written using an increasingly obsolete technology stack.

The Spring framework has continued to evolve while being backward compatible, so in theory FTGO might have been able to upgrade. Unfortunately, the FTGO application uses versions of frameworks that are incompatible with newer versions of Spring. The development team has never found the time to upgrade those frameworks. As a result, major parts of the application are written using increasingly out-of-date frameworks. What’s more, the FTGO developers would like to experiment with non-JVM languages such as GoLang and NodeJS. Sadly, that’s not possible with a monolithic application.

1.2. Why this book is relevant to you

It’s likely that you’re a developer, architect, CTO, or VP of engineering. You’re responsible for an application that has outgrown its monolithic architecture. Like Mary at FTGO, you’re struggling with software delivery and want to know how to escape monolith hell. Or perhaps you fear that your organization is on the path to monolithic hell and you want to know how to change direction before it’s too late. If you need to escape or avoid monolithic hell, this is the book for you.

This book spends a lot of time explaining microservice architecture concepts. My goal is for you to find this material accessible, regardless of the technology stack you use. All you need is to be familiar with the basics of enterprise application architecture and design. In particular, you need to know the following:

  • Three-tier architecture
  • Web application design
  • How to develop business logic using object-oriented design
  • How to use an RDBMS: SQL and ACID transactions
  • How to use interprocess communication via a message broker and REST APIs
  • Security, including authentication and authorization

The code examples in this book are written using Java and the Spring framework. That means in order to get the most out of the examples, you need to be familiar with the Spring framework too.

1.3. What you’ll learn in this book

By the time you finish reading this book you’ll understand the following:

  • The essential characteristics of the microservice architecture, its benefits and drawbacks, and when to use it
  • Distributed data management patterns
  • Effective microservice testing strategies
  • Deployment options for microservices
  • Strategies for refactoring a monolithic application into a microservice architecture

You’ll also be able to do the following:

  • Architect an application using the microservice architecture pattern
  • Develop the business logic for a service
  • Use sagas to maintain data consistency across services
  • Implement queries that span services
  • Effectively test microservices
  • Develop production-ready services that are secure, configurable, and observable
  • Refactor an existing monolithic application to services

1.4. Microservice architecture to the rescue

Mary has come to the conclusion that FTGO must migrate to the microservice architecture.

Interestingly, software architecture has very little to do with functional requirements. You can implement a set of use cases—an application’s functional requirements—with any architecture. In fact, it’s common for successful applications, such as the FTGO application, to be big balls of mud.

Architecture matters, however, because of how it affects the so-called quality of service requirements, also called nonfunctional requirements, quality attributes, or ilities. As the FTGO application has grown, various quality attributes have suffered, most notably those that impact the velocity of software delivery: maintainability, extensibility, and testability.

On the one hand, a disciplined team can slow down the pace of its descent toward monolithic hell. Team members can work hard to maintain the modularity of their application. They can write comprehensive automated tests. On the other hand, they can’t avoid the issues of a large team working on a single monolithic application. Nor can they solve the problem of an increasingly obsolete technology stack. The best a team can do is delay the inevitable. To escape monolithic hell, they must migrate to a new architecture: the Microservice architecture.

Today, the growing consensus is that if you’re building a large, complex application, you should consider using the microservice architecture. But what are microservices exactly? Unfortunately, the name doesn’t help because it overemphasizes size. There are numerous definitions of the microservice architecture. Some take the name too literally and claim that a service should be tiny—for example, 100 LOC. Others claim that a service should only take two weeks to develop. Adrian Cockcroft, formerly of Netflix, defines a microservice architecture as a service-oriented architecture composed of loosely coupled elements that have bounded contexts. That’s not a bad definition, but it is a little dense. Let’s see if we can do better.

1.4.1. Scale cube and microservices

My definition of the microservice architecture is inspired by Martin Abbott and Michael Fisher’s excellent book, The Art of Scalability (Addison-Wesley, 2015). This book describes a useful, three-dimensional scalability model: the scale cube, shown in figure 1.3.

Figure 1.3. The scale cube defines three separate ways to scale an application: X-axis scaling load balances requests across multiple, identical instances; Z-axis scaling routes requests based on an attribute of the request; Y-axis scaling functionally decomposes an application into services.

The model defines three ways to scale an application: X, Y, and Z.

X-axis scaling load balances requests across multiple instances

X-axis scaling is a common way to scale a monolithic application. Figure 1.4 shows how X-axis scaling works. You run multiple instances of the application behind a load balancer. The load balancer distributes requests among the N identical instances of the application. This is a great way of improving the capacity and availability of an application.

Figure 1.4. X-axis scaling runs multiple, identical instances of the monolithic application behind a load balancer.
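To make X-axis scaling concrete, here is a minimal, hypothetical sketch of the load balancer's job: distributing requests in round-robin order across N identical instances. The RoundRobinBalancer class and the instance names are invented for illustration; real load balancers also handle health checks, connection draining, and so on.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of X-axis scaling: distribute requests across N identical
// application instances in round-robin order. Because every instance runs
// the same code, any of them can serve any request.
class RoundRobinBalancer {
    private final List<String> instances;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinBalancer(List<String> instances) {
        this.instances = instances;
    }

    // Returns the instance that should handle the next request.
    String route() {
        int i = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(i);
    }
}
```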

Z-axis scaling routes requests based on an attribute of the request

Z-axis scaling also runs multiple instances of the monolith application, but unlike X-axis scaling, each instance is responsible for only a subset of the data. Figure 1.5 shows how Z-axis scaling works. The router in front of the instances uses a request attribute to route it to the appropriate instance. An application might, for example, route requests using userId.

Figure 1.5. Z-axis scaling runs multiple identical instances of the monolithic application behind a router that routes based on a request attribute. Each instance is responsible for a subset of the data.

In this example, each application instance is responsible for a subset of users. The router uses the userId specified by the request Authorization header to select one of the N identical instances of the application. Z-axis scaling is a great way to scale an application to handle increasing transaction and data volumes.
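Z-axis routing can be sketched in a few lines. The following hypothetical UserPartitionRouter hashes the userId to pick the instance that owns that user's partition of the data; hash-based partitioning is one common scheme, not the only one, and the class and instance names are invented for this example.

```java
import java.util.List;

// Minimal sketch of Z-axis scaling: each instance owns a stable subset of
// the users, and the router sends every request for a given userId to the
// instance responsible for that user's partition.
class UserPartitionRouter {
    private final List<String> instances;

    UserPartitionRouter(List<String> instances) {
        this.instances = instances;
    }

    // Requests for the same userId always land on the same instance.
    String route(String userId) {
        int partition = Math.floorMod(userId.hashCode(), instances.size());
        return instances.get(partition);
    }
}
```

The key property, unlike round-robin X-axis routing, is stability: the same key always maps to the same instance, so that instance can hold that user's slice of the data.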

Y-axis scaling functionally decomposes an application into services

X- and Z-axis scaling improve the application’s capacity and availability. But neither approach solves the problem of increasing development and application complexity. To solve those, you need to apply Y-axis scaling, or functional decomposition. Figure 1.6 shows how Y-axis scaling works: by splitting a monolithic application into a set of services.

Figure 1.6. Y-axis scaling splits the application into a set of services. Each service is responsible for a particular function. A service is scaled using X-axis scaling and, possibly, Z-axis scaling.

A service is a mini application that implements narrowly focused functionality, such as order management, customer management, and so on. A service is scaled using X-axis scaling, though some services may also use Z-axis scaling. For example, the Order service consists of a set of load-balanced service instances.

The high-level definition of microservice architecture (microservices) is an architectural style that functionally decomposes an application into a set of services. Note that this definition doesn’t say anything about size. Instead, what matters is that each service has a focused, cohesive set of responsibilities. Later in the book I discuss what that means.

Now let’s look at how the microservice architecture is a form of modularity.

1.4.2. Microservices as a form of modularity

Modularity is essential when developing large, complex applications. A modern application like FTGO is too large to be developed by an individual. It’s also too complex to be understood by a single person. Applications must be decomposed into modules that are developed and understood by different people. In a monolithic application, modules are defined using a combination of programming language constructs (such as Java packages) and build artifacts (such as Java JAR files). However, as the FTGO developers have discovered, this approach tends not to work well in practice. Long-lived, monolithic applications usually degenerate into big balls of mud.

The microservice architecture uses services as the unit of modularity. A service has an API, which is an impermeable boundary that is difficult to violate. You can’t bypass the API and access an internal class as you can with a Java package. As a result, it’s much easier to preserve the modularity of the application over time. There are other benefits of using services as building blocks, including the ability to deploy and scale them independently.

1.4.3. Each service has its own database

A key characteristic of the microservice architecture is that the services are loosely coupled and communicate only via APIs. One way to achieve loose coupling is by each service having its own datastore. In the online store, for example, Order Service has a database that includes the ORDERS table, and Customer Service has its database, which includes the CUSTOMERS table. At development time, developers can change a service’s schema without having to coordinate with developers working on other services. At runtime, the services are isolated from each other—for example, one service will never be blocked because another service holds a database lock.
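The ORDERS/CUSTOMERS example above can be sketched as follows. This is an illustrative in-process model, not the book's actual FTGO code: the Map-backed "databases" and method names are stand-ins. Each service owns a private store, and the only cross-service access path is the owning service's API.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of "database per service": each service owns a private
// datastore (modeled here as a Map) that other services cannot touch
// directly. Cross-service access goes through the owning service's API.
class CustomerService {
    private final Map<Long, String> customers = new HashMap<>(); // CUSTOMERS "table"

    void addCustomer(long id, String name) { customers.put(id, name); }

    // The only way other services can read customer data.
    String getCustomerName(long id) { return customers.get(id); }
}

class OrderService {
    private final Map<Long, Long> orders = new HashMap<>(); // ORDERS "table": orderId -> customerId
    private final CustomerService customerService;

    OrderService(CustomerService customerService) { this.customerService = customerService; }

    void placeOrder(long orderId, long customerId) { orders.put(orderId, customerId); }

    // Order Service can't join against CUSTOMERS; it must call the API.
    String customerForOrder(long orderId) {
        return customerService.getCustomerName(orders.get(orderId));
    }
}
```

At development time each team can change the internal shape of its own store freely; at runtime neither service can hold a lock in the other's store.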

Don't worry: loose coupling won't make Larry Ellison richer

The requirement for each service to have its own database doesn’t mean it has its own database server. You don’t, for example, have to spend 10 times more on Oracle RDBMS licenses. Chapter 2 explores this topic in depth.

Now that we’ve defined the microservice architecture and described some of its essential characteristics, let’s look at how this applies to the FTGO application.

1.4.4. The FTGO microservice architecture

The rest of this book discusses the FTGO application’s microservice architecture in depth. But first let’s quickly look at what it means to apply Y-axis scaling to this application. If we apply Y-axis decomposition to the FTGO application, we get the architecture shown in figure 1.7. The decomposed application consists of numerous frontend and backend services. We would also apply X-axis and, possibly, Z-axis scaling, so that at runtime there would be multiple instances of each service.

Figure 1.7. Some of the services of the microservice architecture-based version of the FTGO application. An API gateway routes requests from the mobile applications to services. The services collaborate via APIs.

The frontend services include an API gateway and the Restaurant Web UI. The API gateway, which plays the role of a facade and is described in detail in chapter 8, provides the REST APIs that are used by the consumers’ and couriers’ mobile applications. The Restaurant Web UI implements the web interface that’s used by the restaurants to manage menus and process orders.

The FTGO application’s business logic consists of numerous backend services. Each backend service has a REST API and its own private datastore. The backend services include the following:

  • Order Service: Manages orders
  • Delivery Service: Manages delivery of orders from restaurants to consumers
  • Restaurant Service: Maintains information about restaurants
  • Kitchen Service: Manages the preparation of orders
  • Accounting Service: Handles billing and payments

Many services correspond to the modules described earlier in this chapter. What’s different is that each service and its API are very clearly defined. Each one can be independently developed, tested, deployed, and scaled. Also, this architecture does a good job of preserving modularity. A developer can’t bypass a service’s API and access its internal components. Chapter 13 describes how to transform an existing monolithic application into microservices.

1.4.5. Comparing the microservice architecture and SOA

Some critics of the microservice architecture claim it’s nothing new—it’s service-oriented architecture (SOA). At a very high level, there are some similarities. SOA and the microservice architecture are architectural styles that structure a system as a set of services. But as table 1.1 shows, once you dig deep, you encounter significant differences.

Table 1.1. Comparing SOA with microservices

Inter-service communication
  SOA: Smart pipes, such as Enterprise Service Bus, using heavyweight protocols, such as SOAP and the other WS* standards.
  Microservices: Dumb pipes, such as a message broker, or direct service-to-service communication using lightweight protocols such as REST or gRPC.

Data
  SOA: Global data model and shared databases.
  Microservices: Data model and database per service.

Typical service
  SOA: Larger monolithic application.
  Microservices: Smaller services.

SOA and the microservice architecture usually use different technology stacks. SOA applications typically use heavyweight technologies such as SOAP and other WS* standards. They often use an ESB, a smart pipe that contains business and message-processing logic to integrate the services. Applications built using the microservice architecture tend to use lightweight, open source technologies. The services communicate via dumb pipes, such as message brokers or lightweight protocols like REST or gRPC.

SOA and the microservice architecture also differ in how they treat data. SOA applications typically have a global data model and share databases. In contrast, as mentioned earlier, in the microservice architecture each service has its own database. Moreover, as described in chapter 2, each service is usually considered to have its own domain model.

Another key difference between SOA and the microservice architecture is the size of the services. SOA is typically used to integrate large, complex, monolithic applications. Although services in a microservice architecture aren’t always tiny, they’re almost always much smaller. As a result, a SOA application usually consists of a few large services, whereas a microservices-based application typically consists of dozens or hundreds of smaller services.

1.5. Benefits and drawbacks of the microservice architecture

Let’s first consider the benefits and then we’ll look at the drawbacks.

1.5.1. Benefits of the microservice architecture

The microservice architecture has the following benefits:

  • It enables the continuous delivery and deployment of large, complex applications.
  • Services are small and easily maintained.
  • Services are independently deployable.
  • Services are independently scalable.
  • The microservice architecture enables teams to be autonomous.
  • It allows easy experimenting and adoption of new technologies.
  • It has better fault isolation.

Let’s look at each benefit.

Enables the continuous delivery and deployment of large, complex applications

The most important benefit of the microservice architecture is that it enables continuous delivery and deployment of large, complex applications. As described later in section 1.7, continuous delivery/deployment is part of DevOps, a set of practices for the rapid, frequent, and reliable delivery of software. High-performing DevOps organizations typically deploy changes into production with very few production issues.

There are three ways that the microservice architecture enables continuous delivery/deployment:

  • It has the testability required by continuous delivery/deployment. Automated testing is a key practice of continuous delivery/deployment. Because each service in a microservice architecture is relatively small, automated tests are much easier to write and faster to execute. As a result, the application will have fewer bugs.
  • It has the deployability required by continuous delivery/deployment. Each service can be deployed independently of other services. If the developers responsible for a service need to deploy a change that’s local to that service, they don’t need to coordinate with other developers. They can deploy their changes. As a result, it’s much easier to deploy changes frequently into production.
  • It enables development teams to be autonomous and loosely coupled. You can structure the engineering organization as a collection of small (for example, two-pizza) teams. Each team is solely responsible for the development and deployment of one or more related services. As figure 1.8 shows, each team can develop, deploy, and scale their services independently of all the other teams. As a result, the development velocity is much higher.

Figure 1.8. The microservices-based FTGO application consists of a set of loosely coupled services. Each team develops, tests, and deploys their services independently.

The ability to do continuous delivery and deployment has several business benefits:

  • It reduces the time to market, which enables the business to rapidly react to feedback from customers.
  • It enables the business to provide the kind of reliable service today’s customers have come to expect.
  • Employee satisfaction is higher because more time is spent delivering valuable features instead of fighting fires.

As a result, the microservice architecture has become the table stakes of any business that depends upon software technology.

Each service is small and easily maintained

Another benefit of the microservice architecture is that each service is relatively small. The code is easier for a developer to understand. The small code base doesn’t slow down the IDE, making developers more productive. And each service typically starts a lot faster than a large monolith does, which also makes developers more productive and speeds up deployments.

Services are independently scalable

Each service in a microservice architecture can be scaled independently of other services using X-axis cloning and Z-axis partitioning. Moreover, each service can be deployed on hardware that’s best suited to its resource requirements. This is quite different than when using a monolithic architecture, where components with wildly different resource requirements—for example, CPU-intensive vs. memory-intensive—must be deployed together.

Better fault isolation

The microservice architecture has better fault isolation. For example, a memory leak in one service only affects that service. Other services will continue to handle requests normally. In comparison, one misbehaving component of a monolithic architecture will bring down the entire system.

Easily experiment with and adopt new technologies

Last but not least, the microservice architecture eliminates any long-term commitment to a technology stack. In principle, when developing a new service, the developers are free to pick whatever language and frameworks are best suited for that service. In many organizations, it makes sense to restrict the choices, but the key point is that you aren’t constrained by past decisions.

Moreover, because the services are small, rewriting them using better languages and technologies becomes practical. If the trial of a new technology fails, you can throw away that work without risking the entire project. This is quite different than when using a monolithic architecture, where your initial technology choices severely constrain your ability to use different languages and frameworks in the future.

1.5.2. Drawbacks of the microservice architecture

Certainly, no technology is a silver bullet, and the microservice architecture has a number of significant drawbacks and issues. Indeed most of this book is about how to address these drawbacks and issues. As you read about the challenges, don’t worry. Later in this book I describe ways to address them.

Here are the major drawbacks and issues of the microservice architecture:

  • Finding the right set of services is challenging.
  • Distributed systems are complex, which makes development, testing, and deployment difficult.
  • Deploying features that span multiple services requires careful coordination.
  • Deciding when to adopt the microservice architecture is difficult.

Let’s look at each one in turn.

Finding the right services is challenging

One challenge with using the microservice architecture is that there isn’t a concrete, well-defined algorithm for decomposing a system into services. As with much of software development, it’s something of an art. To make matters worse, if you decompose a system incorrectly, you’ll build a distributed monolith, a system consisting of coupled services that must be deployed together. A distributed monolith has the drawbacks of both the monolithic architecture and the microservice architecture.

Distributed systems are complex

Another issue with using the microservice architecture is that developers must deal with the additional complexity of creating a distributed system. Services must use an interprocess communication mechanism. This is more complex than a simple method call. Moreover, a service must be designed to handle partial failure and deal with the remote service either being unavailable or exhibiting high latency.

Implementing use cases that span multiple services requires the use of unfamiliar techniques. Each service has its own database, which makes it a challenge to implement transactions and queries that span services. As described in chapter 4, a microservices-based application must use what are known as sagas to maintain data consistency across services. Chapter 7 explains that a microservices-based application can’t retrieve data from multiple services using simple queries. Instead, it must implement queries using either API composition or CQRS views.
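As a rough illustration of the saga idea mentioned above (covered properly in chapter 4), here is a minimal in-process sketch: each completed step registers a compensating action, and a failure runs the compensations of the already-committed steps in reverse order. Real sagas coordinate local transactions across services via messaging; the Saga class and its API are invented for this example.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Minimal sketch of a saga: a sequence of steps where each committed step
// contributes a compensating action. If a later step fails, the saga
// "rolls back" by running the compensations in reverse order.
class Saga {
    record Step(String name, Runnable action, Runnable compensation) {}

    // Returns true if every step committed, false if the saga compensated.
    static boolean execute(List<Step> steps) {
        Deque<Runnable> compensations = new ArrayDeque<>();
        for (Step step : steps) {
            try {
                step.action().run();
                compensations.push(step.compensation()); // LIFO order
            } catch (RuntimeException e) {
                compensations.forEach(Runnable::run); // undo committed steps
                return false;
            }
        }
        return true;
    }
}
```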

IDEs and other development tools are focused on building monolithic applications and don’t provide explicit support for developing distributed applications. Writing automated tests that involve multiple services is challenging. These are all issues that are specific to the microservice architecture. Consequently, your organization’s developers must have sophisticated software development and delivery skills in order to successfully use microservices.

The microservice architecture also introduces significant operational complexity. Many more moving parts—multiple instances of different types of service—must be managed in production. To successfully deploy microservices, you need a high level of automation. You must use technologies such as the following:

  • Automated deployment tooling, like Netflix Spinnaker
  • An off-the-shelf PaaS, like Pivotal Cloud Foundry or Red Hat OpenShift
  • A Docker orchestration platform, like Docker Swarm or Kubernetes

I describe the deployment options in more detail in chapter 12.

Deploying features that span multiple services requires careful coordination

Another challenge with using the microservice architecture is that deploying features that span multiple services requires careful coordination between the various development teams. You have to create a rollout plan that orders service deployments based on the dependencies between services. That’s quite different than a monolithic architecture, where you can easily deploy updates to multiple components atomically.

Deciding when to adopt is difficult

Another issue with using the microservice architecture is deciding at what point during the lifecycle of the application you should use this architecture. When developing the first version of an application, you often don’t have the problems that this architecture solves. Moreover, using an elaborate, distributed architecture will slow down development. That can be a major dilemma for startups, where the biggest problem is usually how to rapidly evolve the business model and accompanying application. Using the microservice architecture makes it much more difficult to iterate rapidly. A startup should almost certainly begin with a monolithic application.

Later on, though, when the problem is how to handle complexity, that’s when it makes sense to functionally decompose the application into a set of microservices. You may find refactoring difficult because of tangled dependencies. Chapter 13 goes over strategies for refactoring a monolithic application into microservices.

As you can see, the microservice architecture offers many benefits, but also has some significant drawbacks. Because of these issues, adopting a microservice architecture should not be undertaken lightly. But for complex applications, such as a consumer-facing web application or SaaS application, it’s usually the right choice. Well-known sites like eBay (www.slideshare.net/RandyShoup/the-ebay-architecture-striking-a-balance-between-site-stability-feature-velocity-performance-and-cost), Amazon.com, Groupon, and Gilt have all evolved from a monolithic architecture to a microservice architecture.

You must address numerous design and architectural issues when using the microservice architecture. What’s more, many of these issues have multiple solutions, each with a different set of trade-offs. There is no one single perfect solution. To help guide your decision making, I’ve created the Microservice architecture pattern language. I reference this pattern language throughout the rest of the book as I teach you about the microservice architecture. Let’s look at what a pattern language is and why it’s helpful.

1.6. The Microservice architecture pattern language

Architecture and design are all about making decisions. You need to decide whether the monolithic or microservice architecture is the best fit for your application. When making these decisions you have lots of trade-offs to consider. If you pick the microservice architecture, you’ll need to address lots of issues.

A good way to describe the various architectural and design options and improve decision making is to use a pattern language. Let’s first look at why we need patterns and a pattern language, and then we’ll take a tour of the Microservice architecture pattern language.

1.6.1. Microservice architecture is not a silver bullet

Back in 1986, Fred Brooks, author of The Mythical Man-Month (Addison-Wesley Professional, 1995), said that in software engineering, there are no silver bullets. That means there are no techniques or technologies that if adopted would give you a tenfold boost in productivity. Yet decades later, developers are still arguing passionately about their favorite silver bullets, absolutely convinced that their favorite technology will give them a massive boost in productivity.

A lot of arguments follow the suck/rock dichotomy (http://nealford.com/memeagora/2009/08/05/suck-rock-dichotomy.html), a term coined by Neal Ford that describes how everything in the software world either sucks or rocks, with no middle ground. These arguments have this structure: if you do X, then a puppy will die, so therefore you must do Y. For example, synchronous versus reactive programming, object-oriented versus functional, Java versus JavaScript, REST versus messaging. Of course, reality is much more nuanced. Every technology has drawbacks and limitations that are often overlooked by its advocates. As a result, the adoption of a technology usually follows the Gartner hype cycle (https://en.wikipedia.org/wiki/Hype_cycle), in which an emerging technology goes through five phases, including the peak of inflated expectations (it rocks), followed by the trough of disillusionment (it sucks), and ending with the plateau of productivity (we now understand the trade-offs and when to use it).

Microservices are not immune to the silver bullet phenomenon. Whether this architecture is appropriate for your application depends on many factors. Consequently, it’s bad advice to advise always using the microservice architecture, but it’s equally bad advice to advise never using it. As with many things, it depends.

The underlying reason for these polarized and hyped arguments about technology is that humans are primarily driven by their emotions. Jonathan Haidt, in his excellent book The Righteous Mind: Why Good People Are Divided by Politics and Religion (Vintage, 2013), uses the metaphor of an elephant and its rider to describe how the human mind works. The elephant represents the emotional part of the human brain. It makes most of the decisions. The rider represents the rational part of the brain. It can sometimes influence the elephant, but it mostly provides justifications for the elephant's decisions.

We—the software development community—need to overcome our emotional nature and find a better way of discussing and applying technology. A great way to discuss and describe technology is to use the pattern format, because it’s objective. When describing a technology in the pattern format, you must, for example, describe the drawbacks. Let’s take a look at the pattern format.

1.6.2. Patterns and pattern languages

A pattern is a reusable solution to a problem that occurs in a particular context. It’s an idea that has its origins in real-world architecture and that has proven to be useful in software architecture and design. The concept of a pattern was created by Christopher Alexander, a real-world architect. He also created the concept of a pattern language, a collection of related patterns that solve problems within a particular domain. His book A Pattern Language: Towns, Buildings, Construction (Oxford University Press, 1977) describes a pattern language for architecture that consists of 253 patterns. The patterns range from solutions to high-level problems, such as where to locate a city (“Access to water”), to low-level problems, such as how to design a room (“Light on two sides of every room”). Each of these patterns solves a problem by arranging physical objects that range in scope from cities to windows.

Christopher Alexander’s writings inspired the software community to adopt the concept of patterns and pattern languages. The book Design Patterns: Elements of Reusable Object-Oriented Software (Addison-Wesley Professional, 1994), by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides is a collection of object-oriented design patterns. The book popularized patterns among software developers. Since the mid-1990s, software developers have documented numerous software patterns. A software pattern solves a software architecture or design problem by defining a set of collaborating software elements.

Let’s imagine, for example, that you’re building a banking application that must support a variety of overdraft policies. Each policy defines limits on the balance of an account and the fees charged for an overdrawn account. You can solve this problem using the Strategy pattern, which is a well-known pattern from the classic Design Patterns book. The solution defined by the Strategy pattern consists of three parts:

  • A strategy interface called Overdraft that encapsulates the overdraft algorithm
  • One or more concrete strategy classes, one for each particular context
  • The Account class that uses the algorithm
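
The three parts listed above can be sketched in Java. This is a hypothetical illustration, not code from the book: the names Overdraft and Account come from the text, but NoOverdraftAllowed, LimitedOverdraft, and the authorize method are invented for the sketch.

```java
import java.math.BigDecimal;

// The strategy interface that encapsulates the overdraft algorithm.
interface Overdraft {
    // Throws if the debit would violate this overdraft policy.
    void authorize(BigDecimal balance, BigDecimal debitAmount);
}

// A concrete strategy: the balance may never go below zero.
class NoOverdraftAllowed implements Overdraft {
    public void authorize(BigDecimal balance, BigDecimal debitAmount) {
        if (balance.compareTo(debitAmount) < 0)
            throw new IllegalStateException("Insufficient funds");
    }
}

// Another concrete strategy: the balance may go negative up to a limit.
class LimitedOverdraft implements Overdraft {
    private final BigDecimal limit;  // how far below zero the balance may go
    LimitedOverdraft(BigDecimal limit) { this.limit = limit; }
    public void authorize(BigDecimal balance, BigDecimal debitAmount) {
        if (balance.subtract(debitAmount).compareTo(limit.negate()) < 0)
            throw new IllegalStateException("Overdraft limit exceeded");
    }
}

// The class that uses the algorithm; the policy is pluggable per account.
class Account {
    private BigDecimal balance;
    private final Overdraft overdraftPolicy;
    Account(BigDecimal openingBalance, Overdraft overdraftPolicy) {
        this.balance = openingBalance;
        this.overdraftPolicy = overdraftPolicy;
    }
    void debit(BigDecimal amount) {
        overdraftPolicy.authorize(balance, amount);
        balance = balance.subtract(amount);
    }
    BigDecimal getBalance() { return balance; }
}
```

Because Account depends only on the Overdraft interface, a new policy can be added without touching Account itself.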

The Strategy pattern is an object-oriented design pattern, so the elements of the solution are classes. Later in this section, I describe high-level design patterns, where the solution consists of collaborating services.

One reason why patterns are valuable is because a pattern must describe the context within which it applies. The idea that a solution is specific to a particular context and might not work well in other contexts is an improvement over how technology used to typically be discussed. For example, a solution that solves the problem at the scale of Netflix might not be the best approach for an application with fewer users.

The value of a pattern, however, goes far beyond requiring you to consider the context of a problem. It forces you to describe other critical yet frequently overlooked aspects of a solution. A commonly used pattern structure includes three especially valuable sections:

  • Forces
  • Resulting context
  • Related patterns

Let’s look at each of these, starting with forces.

Forces: the issues that you must address when solving a problem

The forces section of a pattern describes the forces (issues) that you must address when solving a problem in a given context. Forces can conflict, so it might not be possible to solve all of them. Which forces are more important depends on the context. You have to prioritize solving some forces over others. For example, code must be easy to understand and have good performance. Code written in a reactive style has better performance than synchronous code, yet is often more difficult to understand. Explicitly listing the forces is useful because it makes clear which issues need to be solved.

Resulting context: the consequences of applying a pattern

The resulting context section of a pattern describes the consequences of applying the pattern. It consists of three parts:

  • Benefits: The benefits of the pattern, including the forces that have been resolved
  • Drawbacks: The drawbacks of the pattern, including the unresolved forces
  • Issues: The new problems that have been introduced by applying the pattern

The resulting context provides a more complete and less biased view of the solution, which enables better design decisions.

Related patterns: the five different types of relationships

The related patterns section of a pattern describes the relationship between the pattern and other patterns. There are five types of relationships between patterns:

  • Predecessor: A predecessor pattern is a pattern that motivates the need for this pattern. For example, the Microservice architecture pattern is the predecessor to the rest of the patterns in the pattern language, except the Monolithic architecture pattern.
  • Successor: A pattern that solves an issue that has been introduced by this pattern. For example, if you apply the Microservice architecture pattern, you must then apply numerous successor patterns, including service discovery patterns and the Circuit breaker pattern.
  • Alternative: A pattern that provides an alternative solution to this pattern. For example, the Monolithic architecture pattern and the Microservice architecture pattern are alternative ways of architecting an application. You pick one or the other.
  • Generalization: A pattern that is a general solution to a problem. For example, in chapter 12 you'll learn about the different implementations of the Single service per host pattern.
  • Specialization: A specialized form of a particular pattern. For example, in chapter 12 you'll learn that the Deploy a service as a container pattern is a specialization of Single service per host.

In addition, you can organize patterns that tackle issues in a particular problem area into groups. The explicit description of related patterns provides valuable guidance on how to effectively solve a particular problem. Figure 1.9 shows how the relationships between patterns are visually represented.

Figure 1.9. The visual representation of the different types of relationships between patterns: a successor pattern solves a problem created by applying a predecessor pattern; two or more patterns can be alternative solutions to the same problem; one pattern can be a specialization of another pattern; and patterns that solve problems in the same area can be grouped, or generalized.

The different kinds of relationships between patterns shown in figure 1.9 are represented as follows:

  • Represents the predecessor-successor relationship
  • Patterns that are alternative solutions to the same problem
  • Indicates that one pattern is a specialization of another pattern
  • Patterns that apply to a particular problem area

A collection of patterns related through these relationships sometimes forms what is known as a pattern language. The patterns in a pattern language work together to solve problems in a particular domain. In particular, I've created the Microservice architecture pattern language. It's a collection of interrelated software architecture and design patterns for microservices. Let's take a look at this pattern language.

1.6.3. Overview of the Microservice architecture pattern language

The Microservice architecture pattern language is a collection of patterns that help you architect an application using the microservice architecture. Figure 1.10 shows the high-level structure of the pattern language. The pattern language first helps you decide whether to use the microservice architecture. It describes the monolithic architecture and the microservice architecture, along with their benefits and drawbacks. Then, if the microservice architecture is a good fit for your application, the pattern language helps you use it effectively by solving various architecture and design issues.

Figure 1.10. A high-level view of the Microservice architecture pattern language, showing the different problem areas that the patterns address. On the left are the application architecture patterns: Monolithic architecture and Microservice architecture. All the other groups of patterns solve problems that result from choosing the Microservice architecture pattern.

The pattern language consists of several groups of patterns. On the left in figure 1.10 is the application architecture patterns group, the Monolithic architecture pattern and the Microservice architecture pattern. Those are the patterns we’ve been discussing in this chapter. The rest of the pattern language consists of groups of patterns that are solutions to issues that are introduced by using the Microservice architecture pattern.

The patterns are also divided into three layers:

  • Infrastructure patterns: These solve problems that are mostly infrastructure issues outside of development.
  • Application infrastructure patterns: These are for infrastructure issues that also impact development.
  • Application patterns: These solve problems faced by developers.

These patterns are grouped together based on the kind of problem they solve. Let’s look at the main groups of patterns.

Patterns for decomposing an application into services

Deciding how to decompose a system into a set of services is very much an art, but there are a number of strategies that can help. The two decomposition patterns shown in figure 1.11 are different strategies you can use to define your application’s architecture.

Figure 1.11. There are two decomposition patterns: Decompose by business capability, which organizes services around business capabilities, and Decompose by subdomain, which organizes services around domain-driven design (DDD) subdomains.

Chapter 2 describes these patterns in detail.

Communication patterns

An application built using the microservice architecture is a distributed system. Consequently, interprocess communication (IPC) is an important part of the microservice architecture. You must make a variety of architectural and design decisions about how your services communicate with one another and the outside world. Figure 1.12 shows the communication patterns, which are organized into five groups:

  • Communication style: What kind of IPC mechanism should you use?
  • Discovery: How does a client of a service determine the IP address of a service instance so that, for example, it makes an HTTP request?
  • Reliability: How can you ensure that communication between services is reliable even though services can be unavailable?
  • Transactional messaging: How should you integrate the sending of messages and publishing of events with database transactions that update business data?
  • External API: How do clients of your application communicate with the services?

Figure 1.12. The five groups of communication patterns

Chapter 3 looks at the first four groups of patterns: communication style, discovery, reliability, and transactional messaging. Chapter 8 looks at the external API patterns.
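
The transactional messaging problem can be pictured with a small in-memory sketch of the outbox idea: the business data change and the outgoing message must be committed together or not at all. Everything here (OutboxStore, its fields, and the OrderCreated message format) is invented for illustration; a real implementation would use a single ACID database transaction plus a message relay.

```java
import java.util.ArrayList;
import java.util.List;

class OutboxStore {
    final List<String> orders = new ArrayList<>();  // the "business data" table
    final List<String> outbox = new ArrayList<>();  // messages awaiting the relay

    // Inserts the order and its OrderCreated message atomically: if anything
    // fails, both writes are undone, so data and messages never diverge.
    void createOrder(String orderId) {
        int ordersMark = orders.size(), outboxMark = outbox.size();
        try {
            orders.add(orderId);
            outbox.add("OrderCreated:" + orderId);
            if (orderId.isEmpty())
                throw new IllegalArgumentException("invalid order id");
        } catch (RuntimeException e) {
            // "Rollback": discard both writes made since the marks.
            orders.subList(ordersMark, orders.size()).clear();
            outbox.subList(outboxMark, outbox.size()).clear();
            throw e;
        }
    }
}
```

In a real database the rollback is free: both inserts share one transaction, and a separate relay process publishes the outbox rows to the message broker.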

Data consistency patterns for implementing transaction management

As mentioned earlier, in order to ensure loose coupling, each service has its own database. Unfortunately, having a database per service introduces some significant issues. I describe in chapter 4 that the traditional approach of using distributed transactions (2PC) isn’t a viable option for a modern application. Instead, an application needs to maintain data consistency by using the Saga pattern. Figure 1.13 shows data-related patterns.

Figure 1.13. Because each service has its own database, you must use the Saga pattern to maintain data consistency across services.

Chapters 4, 5, and 6 describe these patterns in more detail.

Patterns for querying data in a microservice architecture

The other issue with using a database per service is that some queries need to join data that’s owned by multiple services. A service’s data is only accessible via its API, so you can’t use distributed queries against its database. Figure 1.14 shows a couple of patterns you can use to implement queries.

Figure 1.14. Because each service has its own database, you must use one of the querying patterns to retrieve data that's scattered across multiple services.

Sometimes you can use the API composition pattern, which invokes the APIs of one or more services and aggregates results. Other times, you must use the Command query responsibility segregation (CQRS) pattern, which maintains one or more easily queried replicas of the data. Chapter 7 looks at the different ways of implementing queries.
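
The API composition pattern can be sketched as a composer that invokes two services and merges the results. The service interfaces and data shapes below are hypothetical, invented to illustrate the idea; a real composer would make HTTP or messaging calls to the services' APIs.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical provider interfaces; each hides a service's own database.
interface OrderService { Map<String, String> getOrder(String orderId); }
interface DeliveryService { Map<String, String> getDelivery(String orderId); }

// The composer joins data owned by two different services into one response.
class OrderDetailsComposer {
    private final OrderService orders;
    private final DeliveryService deliveries;
    OrderDetailsComposer(OrderService orders, DeliveryService deliveries) {
        this.orders = orders;
        this.deliveries = deliveries;
    }
    Map<String, String> getOrderDetails(String orderId) {
        Map<String, String> result = new HashMap<>();
        result.putAll(orders.getOrder(orderId));         // e.g. status, total
        result.putAll(deliveries.getDelivery(orderId));  // e.g. delivery ETA
        return result;
    }
}
```

The composer never touches either service's database directly; it only aggregates what the APIs return, which is exactly the constraint described above.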

Service deployment patterns

Deploying a monolithic application isn’t always easy, but it is straightforward in the sense that there is a single application to deploy. You have to run multiple instances of the application behind a load balancer.

In comparison, deploying a microservices-based application is much more complex. There may be tens or hundreds of services that are written in a variety of languages and frameworks. There are many more moving parts that need to be managed. Figure 1.15 shows the deployment patterns.

Figure 1.15. Several patterns for deploying microservices. The traditional approach is to deploy services in a language-specific packaging format. There are two modern approaches to deploying services. The first deploys services as VMs or containers. The second is the serverless approach: you simply upload the service's code and the serverless platform runs it. You should use a service deployment platform, which is an automated, self-service platform for deploying and managing services.

The traditional, and often manual, way of deploying applications in a language-specific packaging format, for example WAR files, doesn’t scale to support a microservice architecture. You need a highly automated deployment infrastructure. Ideally, you should use a deployment platform that provides the developer with a simple UI (command-line or GUI) for deploying and managing their services. The deployment platform will typically be based on virtual machines (VMs), containers, or serverless technology. Chapter 12 looks at the different deployment options.

Observability patterns provide insight into application behavior

A key part of operating an application is understanding its runtime behavior and troubleshooting problems such as failed requests and high latency. Though understanding and troubleshooting a monolithic application isn’t always easy, it helps that requests are handled in a simple, straightforward way. Each incoming request is load balanced to a particular application instance, which makes a few calls to the database and returns a response. For example, if you need to understand how a particular request was handled, you look at the log file of the application instance that handled the request.

In contrast, understanding and diagnosing problems in a microservice architecture is much more complicated. A request can bounce around between multiple services before a response is finally returned to a client. Consequently, there isn’t one log file to examine. Similarly, problems with latency are more difficult to diagnose because there are multiple suspects.

You can use the following patterns to design observable services:

  • Health check API: Expose an endpoint that returns the health of the service.
  • Log aggregation: Log service activity and write logs into a centralized logging server, which provides searching and alerting.
  • Distributed tracing: Assign each external request a unique ID and trace requests as they flow between services.
  • Exception tracking: Report exceptions to an exception tracking service, which deduplicates exceptions, alerts developers, and tracks the resolution of each exception.
  • Application metrics: Maintain metrics, such as counters and gauges, and expose them to a metrics server.
  • Audit logging: Log user actions.

Chapter 11 describes these patterns in more detail.
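
The Health check API pattern, for instance, can be pictured as an aggregator over per-dependency checks; an HTTP handler would then map UP/DOWN to status codes such as 200 and 503. The HealthCheck interface and the names below are invented for this sketch, not taken from the book or from any particular framework.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// One probe per dependency, e.g. ping the database or the message broker.
interface HealthCheck {
    boolean isHealthy();
}

class HealthEndpoint {
    private final Map<String, HealthCheck> checks = new LinkedHashMap<>();

    void register(String name, HealthCheck check) { checks.put(name, check); }

    // The service reports UP only if every registered dependency check passes.
    String status() {
        for (HealthCheck check : checks.values())
            if (!check.isHealthy()) return "DOWN";
        return "UP";
    }
}
```

Frameworks such as Spring Boot Actuator provide this kind of endpoint out of the box; the sketch only shows the shape of the pattern.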

Patterns for the automated testing of services

The microservice architecture makes individual services easier to test because they’re much smaller than the monolithic application. At the same time, though, it’s important to test that the different services work together while avoiding using complex, slow, and brittle end-to-end tests that test multiple services together. Here are patterns for simplifying testing by testing services in isolation:

  • Consumer-driven contract test: Verify that a service meets the expectations of its clients.
  • Consumer-side contract test: Verify that the client of a service can communicate with the service.
  • Service component test: Test a service in isolation.

Chapters 9 and 10 describe these testing patterns in more detail.

Patterns for handling cross-cutting concerns

In a microservice architecture, there are numerous concerns that every service must implement, including the observability patterns and discovery patterns. It must also implement the Externalized Configuration pattern, which supplies configuration parameters such as database credentials to a service at runtime. When developing a new service, it would be too time consuming to reimplement these concerns from scratch. A much better approach is to apply the Microservice Chassis pattern and build services on top of a framework that handles these concerns. Chapter 11 describes these patterns in more detail.

Security patterns

In a microservice architecture, users are typically authenticated by the API gateway. It must then pass information about the user, such as identity and roles, to the services it invokes. A common solution is to apply the Access token pattern. The API gateway passes an access token, such as JWT (JSON Web Token), to the services, which can validate the token and obtain information about the user. Chapter 11 discusses the Access token pattern in more detail.

Not surprisingly, the patterns in the Microservice architecture pattern language are focused on solving architectural and design problems. You certainly need the right architecture in order to successfully develop software, but it's not the only concern. You must also consider process and organization.

1.7. Beyond microservices: Process and organization

For a large, complex application, the microservice architecture is usually the best choice. But in addition to having the right architecture, successful software development requires that you also have the right organization and the right development and delivery processes. Figure 1.16 shows the relationships between process, organization, and architecture.

Figure 1.16. The rapid, frequent, and reliable delivery of large, complex applications requires a combination of DevOps, which includes continuous delivery/deployment, small autonomous teams, and the microservice architecture.

I’ve already described the microservice architecture. Let’s look at organization and process.

1.7.1. Software development and delivery organization

Success inevitably means that the engineering team will grow. On the one hand, that's a good thing because more developers can get more done. The trouble with large teams is, as Fred Brooks wrote in The Mythical Man-Month, that the communication overhead of a team of size N is O(N²). If the team gets too large, it will become inefficient due to the communication overhead. Imagine, for example, trying to do a daily standup with 20 people.
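
Brooks' O(N²) overhead can be made concrete: a team of N people has N(N-1)/2 pairwise communication channels. The helper below is a trivial illustration of that formula, not code from the book.

```java
// Number of pairwise communication channels in a team of n people: n(n-1)/2.
class TeamMath {
    static int channels(int teamSize) {
        return teamSize * (teamSize - 1) / 2;
    }
}
```

An 8-person team has 28 channels; a 20-person team has 190, which is why that 20-person standup is so painful.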

The solution is to refactor a large single team into a team of teams. Each team is small, consisting of no more than 8–12 people. It has a clearly defined business-oriented mission: developing and possibly operating one or more services that implement a feature or a business capability. The team is cross-functional and can develop, test, and deploy its services without having to frequently communicate or coordinate with other teams.

The inverse Conway maneuver

In order to effectively deliver software when using the microservice architecture, you need to take into account Conway’s law (https://en.wikipedia.org/wiki/Conway%27s_law), which states the following:

Organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations.

Melvin Conway

In other words, your application’s architecture mirrors the structure of the organization that developed it. It’s important, therefore, to apply Conway’s law in reverse (www.thoughtworks.com/radar/techniques/inverse-conway-maneuver) and design your organization so that its structure mirrors your microservice architecture. By doing so, you ensure that your development teams are as loosely coupled as the services.

The velocity of the team of teams is significantly higher than that of a single large team. As described earlier in section 1.5.1, the microservice architecture plays a key role in enabling the teams to be autonomous. Each team can develop, deploy, and scale their services without coordinating with other teams. Moreover, it’s very clear who to contact when a service isn’t meeting its SLA.

What’s more, the development organization is much more scalable. You grow the organization by adding teams. If a single team becomes too large, you split it and its associated service or services. Because the teams are loosely coupled, you avoid the communication overhead of a large team. As a result, you can add people without impacting productivity.

1.7.2. Software development and delivery process

Using the microservice architecture with a waterfall development process is like driving a horse-drawn Ferrari—you squander most of the benefit of using microservices. If you want to develop an application with the microservice architecture, it’s essential that you adopt agile development and deployment practices such as Scrum or Kanban. Better yet, you should practice continuous delivery/deployment, which is a part of DevOps.

Jez Humble (https://continuousdelivery.com/) defines continuous delivery as follows:

Continuous Delivery is the ability to get changes of all types—including new features, configuration changes, bug fixes and experiments—into production, or into the hands of users, safely and quickly in a sustainable way.

A key characteristic of continuous delivery is that software is always releasable. It relies on a high level of automation, including automated testing. Continuous deployment takes continuous delivery one step further in the practice of automatically deploying releasable code into production. High-performing organizations that practice continuous deployment deploy multiple times per day into production, have far fewer production outages, and recover quickly from any that do occur (https://puppet.com/resources/whitepaper/state-of-devops-report). As described earlier in section 1.5.1, the microservice architecture directly supports continuous delivery/deployment.

Moving fast without breaking things

The goal of continuous delivery/deployment (and, more generally, DevOps) is to rapidly yet reliably deliver software. Four useful metrics for assessing software development are as follows:

  • Deployment frequency: How often software is deployed into production
  • Lead time: Time from a developer checking in a change to that change being deployed
  • Mean time to recover: Time to recover from a production problem
  • Change failure rate: Percentage of changes that result in a production problem

In a traditional organization, the deployment frequency is low, and the lead time is high. Stressed-out developers and operations people typically stay up late into the night fixing last-minute issues during the maintenance window. In contrast, a DevOps organization releases software frequently, often multiple times per day, with far fewer production issues. Amazon, for example, deployed changes into production every 11.6 seconds in 2014 (www.youtube.com/watch?v=dxk8b9rSKOo), and Netflix had a lead time of 16 minutes for one software component (https://medium.com/netflix-techblog/how-we-build-code-at-netflix-c5d9bd727f15).

1.7.3. The human side of adopting microservices

Adopting the microservice architecture changes your architecture, your organization, and your development processes. Ultimately, though, it changes the working environment of people, who are, as mentioned earlier, emotional creatures. If ignored, their emotions can make the adoption of microservices a bumpy ride. Mary and the other FTGO leaders will struggle to change how FTGO develops software.

The best-selling book Managing Transitions (Da Capo Lifelong Books, 2017, https://wmbridges.com/books) by William and Susan Bridges introduces the concept of a transition, which refers to the process of how people respond emotionally to a change. It describes a three-stage Transition Model:

  1. Ending, Losing, and Letting Go: The period of emotional upheaval and resistance when people are presented with a change that forces them out of their comfort zone. They often mourn the loss of the old way of doing things. For example, when people reorganize into cross-functional teams, they miss their former teammates. Similarly, a data modeling group that owns the global data model will be threatened by the idea of each service having its own data model.
  2. The Neutral Zone: The intermediate stage between the old and new ways of doing things, where people are often confused. They are often struggling to learn the new way of doing things.
  3. The New Beginning: The final stage, where people have enthusiastically embraced the new way of doing things and are starting to experience the benefits.

The book describes how best to manage each stage of the transition and increase the likelihood of successfully implementing the change. FTGO is certainly suffering from monolithic hell and needs to migrate to a microservice architecture. It must also change its organization and development processes. In order for FTGO to successfully accomplish this, however, it must take into account the transition model and consider people’s emotions.

In the next chapter, you’ll learn about the goal of software architecture and how to decompose an application into services.

Summary

  • The Monolithic architecture pattern structures the application as a single deployable unit.
  • The Microservice architecture pattern decomposes a system into a set of independently deployable services, each with its own database.
  • The monolithic architecture is a good choice for simple applications, but microservice architecture is usually a better choice for large, complex applications.
  • The microservice architecture accelerates the velocity of software development by enabling small, autonomous teams to work in parallel.
  • The microservice architecture isn’t a silver bullet—there are significant drawbacks, including complexity.
  • The Microservice architecture pattern language is a collection of patterns that help you architect an application using the microservice architecture. It helps you decide whether to use the microservice architecture, and if you pick the microservice architecture, the pattern language helps you apply it effectively.
  • You need more than just the microservice architecture to accelerate software delivery. Successful software development also requires DevOps and small, autonomous teams.
  • Don’t forget about the human side of adopting microservices. You need to consider employees’ emotions in order to successfully transition to a microservice architecture.

Chapter 2. Decomposition strategies

This chapter covers

  • Understanding software architecture and why it’s important
  • Decomposing an application into services by applying the decomposition patterns Decompose by business capability and Decompose by subdomain
  • Using the bounded context concept from domain-driven design (DDD) to untangle data and make decomposition easier

Sometimes you have to be careful what you wish for. After an intense lobbying effort, Mary had finally convinced the business that migrating to a microservice architecture was the right thing to do. Feeling a mixture of excitement and some trepidation, Mary had a morning-long meeting with her architects to discuss where to begin. During the discussion, it became apparent that some aspects of the Microservice architecture pattern language, such as deployment and service discovery, were new and unfamiliar, yet straightforward. The key challenge, which is the essence of the microservice architecture, is the functional decomposition of the application into services. The first and most important aspect of the architecture is, therefore, the definition of the services. As they stood around the whiteboard, the FTGO team wondered exactly how to do that!

In this chapter, you’ll learn how to define a microservice architecture for an application. I describe strategies for decomposing an application into services. You’ll learn that services are organized around business concerns rather than technical concerns. I also show how to use ideas from domain-driven design (DDD) to eliminate god classes, which are classes that are used throughout an application and cause tangled dependencies that prevent decomposition.

I begin this chapter by defining the microservice architecture in terms of software architecture concepts. After that, I describe a process for defining a microservice architecture for an application starting from its requirements. I discuss strategies for decomposing an application into a collection of services, obstacles to it, and how to overcome them. Let’s start by examining the concept of software architecture.

2.1. What is the microservice architecture exactly?

Chapter 1 describes how the key idea of the microservice architecture is functional decomposition. Instead of developing one large application, you structure the application as a set of services. On one hand, describing the microservice architecture as a kind of functional decomposition is useful. But on the other hand, it leaves several questions unanswered, including how does the microservice architecture relate to the broader concepts of software architecture? What’s a service? And how important is the size of a service?

In order to answer those questions, we need to take a step back and look at what is meant by software architecture. The architecture of a software application is its high-level structure, which consists of constituent parts and the dependencies between those parts. As you’ll see in this section, an application’s architecture is multidimensional, so there are multiple ways to describe it. The reason architecture is important is because it determines the application’s software quality attributes or -ilities. Traditionally, the goal of architecture has been scalability, reliability, and security. But today it’s important that the architecture also enables the rapid and safe delivery of software. You’ll learn that the microservice architecture is an architecture style that gives an application high maintainability, testability, and deployability.

I begin this section by describing the concept of software architecture and why it’s important. Next, I discuss the idea of an architectural style. Then I define the microservice architecture as a particular architectural style. Let’s start by looking at the concept of software architecture.

2.1.1. What is software architecture and why does it matter?

Architecture is clearly important. There are at least two conferences dedicated to the topic: O’Reilly Software Architecture Conference (https://conferences.oreilly.com/software-architecture) and the SATURN conference (https://resources.sei.cmu.edu/news-events/events/saturn/). Many developers have the goal of becoming an architect. But what is architecture and why does it matter?

To answer that question, I first define what is meant by the term software architecture. After that, I discuss how an application’s architecture is multidimensional and is best described using a collection of views or blueprints. I then describe that software architecture matters because of its impact on the application’s software quality attributes.

The definition of software architecture

There are numerous definitions of software architecture. For example, see https://en.wikiquote.org/wiki/Software_architecture to read some of them. My favorite definition comes from Len Bass and colleagues at the Software Engineering Institute (www.sei.cmu.edu), who played a key role in establishing software architecture as a discipline. They define software architecture as follows:

The software architecture of a computing system is the set of structures needed to reason about the system, which comprise software elements, relations among them, and properties of both.

Documenting Software Architectures by Bass et al.

That’s obviously a quite abstract definition. But its essence is that an application’s architecture is its decomposition into parts (the elements) and the relationships (the relations) between those parts. Decomposition is important for a couple of reasons:

  • It facilitates the division of labor and knowledge. It enables multiple people (or multiple teams) with possibly specialized knowledge to work productively together on an application.
  • It defines how the software elements interact.

It’s the decomposition into parts and the relationships between those parts that determine the application’s -ilities.

The 4+1 view model of software architecture

More concretely, an application’s architecture can be viewed from multiple perspectives, in the same way that a building’s architecture can be viewed from structural, plumbing, electrical, and other perspectives. Philippe Kruchten wrote a classic paper describing the 4+1 view model of software architecture, “Architectural Blueprints—The ‘4+1’ View Model of Software Architecture” (www.cs.ubc.ca/~gregor/teaching/papers/4+1view-architecture.pdf). The 4+1 model, shown in Figure 2.1, defines four different views of a software architecture. Each describes a particular aspect of the architecture and consists of a particular set of software elements and relationships between them.

Figure 2.1. The 4+1 view model describes an application’s architecture using four views, along with scenarios that show how the elements within each view collaborate to handle requests.

The purpose of each view is as follows:

  • Logical view: The software elements that are created by developers. In object-oriented languages, these elements are classes and packages. The relations between them are the relationships between classes and packages, including inheritance, associations, and depends-on.
  • Implementation view: The output of the build system. This view consists of modules, which represent packaged code, and components, which are executable or deployable units consisting of one or more modules. In Java, a module is a JAR file, and a component is typically a WAR file or an executable JAR file. The relations between them include dependency relationships between modules and composition relationships between components and modules.
  • Process view: The components at runtime. Each element is a process, and the relations between processes represent interprocess communication.
  • Deployment view: How the processes are mapped to machines. The elements in this view consist of (physical or virtual) machines and the processes. The relations between machines represent networking. This view also describes the relationship between processes and machines.

In addition to these four views, there are the scenarios—the +1 in the 4+1 model—that animate views. Each scenario describes how the various architectural components within a particular view collaborate in order to handle a request. A scenario in the logical view, for example, shows how the classes collaborate. Similarly, a scenario in the process view shows how the processes collaborate.

The 4+1 view model is an excellent way to describe an application’s architecture. Each view describes an important aspect of the architecture, and the scenarios illustrate how the elements of a view collaborate. Let’s now look at why architecture is important.

Why architecture matters

An application has two categories of requirements. The first category includes the functional requirements, which define what the application must do. They’re usually in the form of use cases or user stories. Architecture has very little to do with the functional requirements. You can implement functional requirements with almost any architecture, even a big ball of mud.

Architecture is important because it enables an application to satisfy the second category of requirements: its quality of service requirements. These are also known as quality attributes and are the so-called -ilities. The quality of service requirements define the runtime qualities such as scalability and reliability. They also define development time qualities including maintainability, testability, and deployability. The architecture you choose for your application determines how well it meets these quality requirements.

2.1.2. Overview of architectural styles

In the physical world, a building’s architecture often follows a particular style, such as Victorian, American Craftsman, or Art Deco. Each style is a package of design decisions that constrains a building’s features and building materials. The concept of architectural style also applies to software. David Garlan and Mary Shaw (An Introduction to Software Architecture, January 1994, https://www.cs.cmu.edu/afs/cs/project/able/ftp/intro_softarch/intro_softarch.pdf), pioneers in the discipline of software architecture, define an architectural style as follows:

An architectural style, then, defines a family of such systems in terms of a pattern of structural organization. More specifically, an architectural style determines the vocabulary of components and connectors that can be used in instances of that style, together with a set of constraints on how they can be combined.

A particular architectural style provides a limited palette of elements (components) and relations (connectors) from which you can define a view of your application’s architecture. An application typically uses a combination of architectural styles. For example, later in this section I describe how the monolithic architecture is an architectural style that structures the implementation view as a single (executable/deployable) component. The microservice architecture structures an application as a set of loosely coupled services.

The layered architectural style

The classic example of an architectural style is the layered architecture. A layered architecture organizes software elements into layers. Each layer has a well-defined set of responsibilities. A layered architecture also constrains the dependencies between the layers. A layer can only depend on either the layer immediately below it (if strict layering) or any of the layers below it.

You can apply the layered architecture to any of the four views discussed earlier. The popular three-tier architecture is the layered architecture applied to the logical view. It organizes the application’s classes into the following tiers or layers:

  • Presentation layer: Contains code that implements the user interface or external APIs
  • Business logic layer: Contains the business logic
  • Persistence layer: Implements the logic of interacting with the database

The layered architecture is a great example of an architectural style, but it does have some significant drawbacks:

  • Single presentation layer: It doesn’t represent the fact that an application is likely to be invoked by more than just a single system.
  • Single persistence layer: It doesn’t represent the fact that an application is likely to interact with more than just a single database.
  • Defines the business logic layer as depending on the persistence layer: In theory, this dependency prevents you from testing the business logic without the database.

Also, the layered architecture misrepresents the dependencies in a well-designed application. The business logic typically defines an interface or a repository of interfaces that define data access methods. The persistence tier defines DAO classes that implement the repository interfaces. In other words, the dependencies are the reverse of what’s depicted by a layered architecture.
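That inverted dependency can be sketched as follows (all names are illustrative): the business logic owns the repository interface, and the persistence tier’s DAO implements it, which is also what makes the business logic testable without a database:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Owned by the business logic: the data access operations it needs.
interface OrderRepository {
    void save(long id, String details);
    Optional<String> findById(long id);
}

// The business logic depends only on the interface it defines...
class OrderService {
    private final OrderRepository repository;
    OrderService(OrderRepository repository) { this.repository = repository; }
    void placeOrder(long id, String details) { repository.save(id, details); }
    Optional<String> getOrder(long id) { return repository.findById(id); }
}

// ...while the persistence tier supplies a DAO that implements it.
// Here an in-memory map stands in for a real database, which is exactly
// how a test can exercise the business logic in isolation.
class InMemoryOrderDao implements OrderRepository {
    private final Map<Long, String> rows = new HashMap<>();
    public void save(long id, String details) { rows.put(id, details); }
    public Optional<String> findById(long id) { return Optional.ofNullable(rows.get(id)); }
}
```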

Let’s look at an alternative architecture that overcomes these drawbacks: the hexagonal architecture.

About the hexagonal architecture style

Hexagonal architecture is an alternative to the layered architectural style. As figure 2.2 shows, the hexagonal architecture style organizes the logical view in a way that places the business logic at the center. Instead of the presentation layer, the application has one or more inbound adapters that handle requests from the outside by invoking the business logic. Similarly, instead of a data persistence tier, the application has one or more outbound adapters that are invoked by the business logic and invoke external applications. A key characteristic and benefit of this architecture is that the business logic doesn’t depend on the adapters. Instead, they depend upon it.

Figure 2.2. An example of a hexagonal architecture, which consists of the business logic and one or more adapters that communicate with external systems. The business logic has one or more ports. Inbound adapters, which handle requests from external systems, invoke an inbound port. An outbound adapter implements an outbound port and invokes an external system.

The business logic has one or more ports. A port defines a set of operations and is how the business logic interacts with what’s outside of it. In Java, for example, a port is often a Java interface. There are two kinds of ports: inbound and outbound ports. An inbound port is an API exposed by the business logic, which enables it to be invoked by external applications. An example of an inbound port is a service interface, which defines a service’s public methods. An outbound port is how the business logic invokes external systems. An example of an outbound port is a repository interface, which defines a collection of data access operations.

Surrounding the business logic are adapters. As with ports, there are two types of adapters: inbound and outbound. An inbound adapter handles requests from the outside world by invoking an inbound port. An example of an inbound adapter is a Spring MVC Controller that implements either a set of REST endpoints or a set of web pages. Another example is a message broker client that subscribes to messages. Multiple inbound adapters can invoke the same inbound port.

An outbound adapter implements an outbound port and handles requests from the business logic by invoking an external application or service. An example of an outbound adapter is a data access object (DAO) class that implements operations for accessing a database. Another example would be a proxy class that invokes a remote service. Outbound adapters can also publish events.

An important benefit of the hexagonal architectural style is that it decouples the business logic from the presentation and data access logic in the adapters. The business logic doesn’t depend on either the presentation logic or the data access logic. Because of this decoupling, it’s much easier to test the business logic in isolation. Another benefit is that it more accurately reflects the architecture of a modern application. The business logic can be invoked via multiple adapters, each of which implements a particular API or UI. The business logic can also invoke multiple adapters, each one of which invokes a different external system. Hexagonal architecture is a great way to describe the architecture of each service in a microservice architecture.
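Putting these pieces together, here is a compact, purely illustrative sketch of the style (every name is invented): the business logic in the center, one port on each side, and one adapter per port:

```java
import java.util.HashMap;
import java.util.Map;

// Inbound port: the API the business logic exposes to the outside.
interface PlaceOrderUseCase {
    long placeOrder(String item);
}

// Outbound port: what the business logic needs from the outside.
interface OrderPersistencePort {
    void store(long id, String item);
}

// The business logic sits at the center and depends only on its ports.
class OrderCore implements PlaceOrderUseCase {
    private final OrderPersistencePort persistence;
    private long nextId = 1;
    OrderCore(OrderPersistencePort persistence) { this.persistence = persistence; }
    public long placeOrder(String item) {
        long id = nextId++;
        persistence.store(id, item);
        return id;
    }
}

// Outbound adapter: implements the outbound port. A real one would be a DAO
// talking to a database; an in-memory map stands in for it here.
class InMemoryPersistenceAdapter implements OrderPersistencePort {
    final Map<Long, String> rows = new HashMap<>();
    public void store(long id, String item) { rows.put(id, item); }
}

// Inbound adapter: handles requests from the outside world by invoking the
// inbound port. In a real application this might be a Spring MVC controller.
class HttpAdapter {
    private final PlaceOrderUseCase useCase;
    HttpAdapter(PlaceOrderUseCase useCase) { this.useCase = useCase; }
    String handlePost(String item) { return "created order " + useCase.placeOrder(item); }
}
```

Note the direction of the arrows: both adapters depend on the ports, and the business logic depends on neither adapter.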

The layered and hexagonal architectures are both examples of architectural styles. Each defines the building blocks of an architecture and imposes constraints on the relationships between them. The hexagonal architecture and the layered architecture, in the form of a three-tier architecture, organize the logical view. Let’s now define the microservice architecture as an architectural style that organizes the implementation view.

2.1.3. The microservice architecture is an architectural style

I’ve discussed the 4+1 view model and architectural styles, so I can now define monolithic and microservice architecture. They’re both architectural styles. Monolithic architecture is an architectural style that structures the implementation view as a single component: a single executable or WAR file. This definition says nothing about the other views. A monolithic application can, for example, have a logical view that’s organized along the lines of a hexagonal architecture.

Pattern: Monolithic architecture

Structure the application as a single executable/deployable component. See http://microservices.io/patterns/monolithic.html.

The microservice architecture is also an architectural style. It structures the implementation view as a set of multiple components: executables or WAR files. The components are services, and the connectors are the communication protocols that enable those services to collaborate. Each service has its own logical view architecture, which is typically a hexagonal architecture. Figure 2.3 shows a possible microservice architecture for the FTGO application. The services in this architecture correspond to business capabilities, such as Order management and Restaurant management.

Pattern: Microservice architecture

Structure the application as a collection of loosely coupled, independently deployable services. See http://microservices.io/patterns/microservices.html.

Figure 2.3. A possible microservice architecture for the FTGO application. It consists of numerous services.

Later in this chapter, I describe what is meant by business capability. The connectors between services are implemented using interprocess communication mechanisms such as REST APIs and asynchronous messaging. Chapter 3 discusses interprocess communication in more detail.

A key constraint imposed by the microservice architecture is that the services are loosely coupled. Consequently, there are restrictions on how the services collaborate. In order to explain those restrictions, I’ll attempt to define the term service, describe what it means to be loosely coupled, and tell you why this matters.

What are services?

A service is a standalone, independently deployable software component that implements some useful functionality. Figure 2.4 shows the external view of a service, which in this example is the Order Service. A service has an API that provides its clients access to its functionality. There are two types of operations: commands and queries. The API consists of commands, queries, and events. A command, such as createOrder(), performs actions and updates data. A query, such as findOrderById(), retrieves data. A service also publishes events, such as OrderCreated, which are consumed by its clients.
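The createOrder(), findOrderById(), and OrderCreated names come from the text above; everything else in this in-process sketch is illustrative. A real Order Service would expose the command and query over a network API and publish events to a message broker, but the shape of the API is the same:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.function.Consumer;

// The event the service publishes when an order is created.
class OrderCreated {
    final long orderId;
    OrderCreated(long orderId) { this.orderId = orderId; }
}

// Sketch of the Order Service API: a command, a query, and an event stream.
class OrderService {
    private final Map<Long, String> orders = new HashMap<>();
    private final List<Consumer<OrderCreated>> subscribers = new ArrayList<>();
    private long nextId = 1;

    // Command: performs an action and updates data, then publishes an event.
    long createOrder(String item) {
        long id = nextId++;
        orders.put(id, item);
        OrderCreated event = new OrderCreated(id);
        subscribers.forEach(subscriber -> subscriber.accept(event));
        return id;
    }

    // Query: retrieves data without modifying it.
    Optional<String> findOrderById(long id) {
        return Optional.ofNullable(orders.get(id));
    }

    // Clients subscribe to the events the service publishes.
    void subscribe(Consumer<OrderCreated> subscriber) { subscribers.add(subscriber); }
}
```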

Figure 2.4. A service has an API that encapsulates the implementation. The API defines operations, which are invoked by clients. There are two types of operations: commands update data, and queries retrieve data. When its data changes, a service publishes events that clients can subscribe to.

A service’s API encapsulates its internal implementation. Unlike in a monolith, a developer can’t write code that bypasses its API. As a result, the microservice architecture enforces the application’s modularity.

Each service in a microservice architecture has its own architecture and, potentially, technology stack. But a typical service has a hexagonal architecture. Its API is implemented by adapters that interact with the service’s business logic. The operations adapter invokes the business logic, and the events adapter publishes events emitted by the business logic.

Later in chapter 12, when I discuss deployment technologies, you’ll see that the implementation view of a service can take many forms. The component might be a standalone process, a web application or OSGI bundle running in a container, or a serverless cloud function. An essential requirement, however, is that a service has an API and is independently deployable.

What is loose coupling?

An important characteristic of the microservice architecture is that the services are loosely coupled (https://en.wikipedia.org/wiki/Loose_coupling). All interaction with a service happens via its API, which encapsulates its implementation details. This enables the implementation of the service to change without impacting its clients. Loosely coupled services are key to improving an application’s development time attributes, including its maintainability and testability. They are much easier to understand, change, and test.

The requirement for services to be loosely coupled and to collaborate only via APIs prohibits services from communicating via a database. You must treat a service’s persistent data like the fields of a class and keep them private. Keeping the data private enables a developer to change their service’s database schema without having to spend time coordinating with developers working on other services. Not sharing database tables also improves runtime isolation. It ensures, for example, that one service can’t hold database locks that block another service. Later on, though, you’ll learn that one downside of not sharing databases is that maintaining data consistency and querying across services are more complex.

共享库的作用

开发人员通常将功能打包到库（模块）中，以便多个应用程序可以重用它而无需复制代码。毕竟，如果没有 Maven 或 npm 存储库，我们今天会在哪里？您可能也想在微服务架构中使用共享库。从表面上看，这似乎是减少服务中代码重复的好方法。但是，您需要确保不会意外地在服务之间引入耦合。

Developers often package functionality in a library (module) so that it can be reused by multiple applications without duplicating code. After all, where would we be today without Maven or npm repositories? You might be tempted to also use shared libraries in microservice architecture. On the surface, it looks like a good way to reduce code duplication in your services. But you need to ensure that you don’t accidentally introduce coupling between your services.

例如，假设多个服务需要更新 Order 业务对象。一种方法是将该功能打包为供多个服务使用的库。一方面，使用库可以消除代码重复。另一方面，考虑当需求以影响 Order 业务对象的方式发生变化时会发生什么。您需要同时重新构建和重新部署这些服务。更好的方法是将可能发生变化的功能（例如 Order 管理）实现为服务。

Imagine, for example, that multiple services need to update the Order business object. One approach is to package that functionality as a library that’s used by multiple services. On one hand, using a library eliminates code duplication. On the other hand, consider what happens when the requirements change in a way that affects the Order business object. You would need to simultaneously rebuild and redeploy those services. A much better approach would be to implement functionality that’s likely to change, such as Order management, as a service.

您应该努力将库用于不太可能更改的功能。例如，在典型的应用程序中，让每个服务都实现一个泛型 Money 类是没有意义的。相反，您应该创建一个供各服务使用的库。

You should strive to use libraries for functionality that’s unlikely to change. For example, in a typical application it makes no sense for every service to implement a generic Money class. Instead, you should create a library that’s used by the services.
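
To make the distinction concrete, here is a minimal sketch of the kind of generic Money class that belongs in a shared library. The design shown here (an immutable value object storing an amount in the smallest currency unit) is an illustrative assumption, not the book's actual implementation.

```java
// Illustrative sketch: an immutable Money value object suitable for a shared
// library, because its behavior is generic and unlikely to change.
public final class Money {
    private final long cents; // amount in the smallest currency unit

    public Money(long cents) {
        this.cents = cents;
    }

    public Money add(Money other) {
        return new Money(this.cents + other.cents);
    }

    public boolean isGreaterThanOrEqual(Money other) {
        return this.cents >= other.cents;
    }

    public long asCents() {
        return cents;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof Money && ((Money) o).cents == cents;
    }

    @Override
    public int hashCode() {
        return Long.hashCode(cents);
    }
}
```

Because behavior like add() is stable and general purpose, sharing it as a library doesn't couple the services that use it the way a shared Order library would.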

服务的大小大多不重要

微服务这个术语的一个问题是,你首先听到的是 micro。这表明服务应该非常小。其他基于大小的术语(如 miniservice 或 nanoservice)也是如此。 实际上,大小并不是一个有用的指标。

One problem with the term microservice is that the first thing you hear is micro. This suggests that a service should be very small. This is also true of other size-based terms such as miniservice or nanoservice. In reality, size isn’t a useful metric.

更好的目标是将设计良好的服务定义为能够由一个小团队以最短的交付时间开发、且与其他团队的协作最少的服务。理论上，一个团队可能只负责单个服务，因此该服务绝不是微小的。相反，如果一项服务需要一个大型团队或需要很长时间进行测试，那么拆分团队和服务可能是有意义的。或者，如果您由于其他服务的更改而不断需要更改某个服务，或者它不断触发其他服务的更改，这表明它不是松散耦合的。您甚至可能已经构建了一个分布式单体。

A much better goal is to define a well-designed service to be a service capable of being developed by a small team with minimal lead time and with minimal collaboration with other teams. In theory, a team might only be responsible for a single service, so that service is by no means micro. Conversely, if a service requires a large team or takes a long time to test, it probably makes sense to split the team and the service. Or if you constantly need to change a service because of changes to other services or if it’s triggering changes in other services, that’s a sign that it’s not loosely coupled. You might even have built a distributed monolith.

微服务架构将应用程序构建为一组小型、松散耦合的服务。因此，它改善了开发时属性（可维护性、可测试性、可部署性等），并使组织能够更快地开发更好的软件。它还提高了应用程序的可伸缩性，尽管这不是主要目标。要为应用程序开发微服务架构，您需要识别服务并确定它们如何协作。让我们看看如何做到这一点。

The microservice architecture structures an application as a set of small, loosely coupled services. As a result, it improves the development time attributes—maintainability, testability, deployability, and so on—and enables an organization to develop better software faster. It also improves an application’s scalability, although that’s not the main goal. To develop a microservice architecture for your application, you need to identify the services and determine how they collaborate. Let’s look at how to do that.

2.2. 定义应用程序的微服务架构

2.2. Defining an application’s microservice architecture

我们应该如何定义微服务架构？与任何软件开发工作一样，起点是书面的需求、领域专家（希望有），也许还有现有的应用程序。与许多软件开发工作一样，定义架构与其说是科学，不如说是艺术。本节描述了一个简单的三步过程（如图 2.5 所示），用于定义应用程序的架构。不过，重要的是要记住，这不是一个可以机械遵循的过程。它很可能是迭代的，并涉及大量的创造力。

How should we define a microservice architecture? As with any software development effort, the starting points are the written requirements, hopefully domain experts, and perhaps an existing application. Like much of software development, defining an architecture is more art than science. This section describes a simple, three-step process, shown in figure 2.5, for defining an application’s architecture. It’s important to remember, though, that it’s not a process you can follow mechanically. It’s likely to be iterative and involve a lot of creativity.

图 2.5.定义应用程序微服务架构的三步过程

应用程序的存在是为了处理请求，因此定义其架构的第一步是将应用程序的需求提炼为关键请求。但是，我不使用特定的 IPC 技术（比如 REST 或消息传递）来描述请求，而是使用更抽象的系统操作概念。系统操作是应用程序必须处理的请求的抽象。它要么是更新数据的命令，要么是检索数据的查询。每个命令的行为都是根据抽象域模型定义的，该模型同样派生自需求。系统操作成为说明服务如何协作的架构场景。

An application exists to handle requests, so the first step in defining its architecture is to distill the application’s requirements into the key requests. But instead of describing the requests in terms of specific IPC technologies such as REST or messaging, I use the more abstract notion of system operation. A system operation is an abstraction of a request that the application must handle. It’s either a command, which updates data, or a query, which retrieves data. The behavior of each command is defined in terms of an abstract domain model, which is also derived from the requirements. The system operations become the architectural scenarios that illustrate how the services collaborate.
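
As a sketch of this idea, the fragment below models one command and one query behind a technology-neutral Java interface, backed by a trivial in-memory stand-in. The interface and all names in it are illustrative assumptions, not FTGO's actual API.

```java
// Sketch: system operations modeled abstractly, without committing to REST,
// RPC, or messaging. createOrder() is a command (it updates data) and
// getOrderStatus() is a query (it retrieves data).
import java.util.HashMap;
import java.util.Map;

public class OperationsSketch {

    // The abstract operations a hypothetical order system must handle
    public interface OrderSystem {
        String createOrder(String consumerId, String restaurantId); // command
        String getOrderStatus(String orderId);                      // query
    }

    // Trivial in-memory stand-in, just to make the abstraction concrete
    public static class InMemoryOrderSystem implements OrderSystem {
        private final Map<String, String> statusByOrderId = new HashMap<>();
        private int nextId = 1;

        public String createOrder(String consumerId, String restaurantId) {
            String orderId = "order-" + nextId++;
            statusByOrderId.put(orderId, "PENDING_ACCEPTANCE");
            return orderId;
        }

        public String getOrderStatus(String orderId) {
            return statusByOrderId.get(orderId);
        }
    }

    // Demonstration: place an order, then query its status
    public static String demo() {
        OrderSystem system = new InMemoryOrderSystem();
        String orderId = system.createOrder("consumer-1", "restaurant-1");
        return system.getOrderStatus(orderId);
    }
}
```

The same interface could later be realized as REST endpoints or message channels without changing the abstract operations.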

该过程的第二步是确定如何分解为服务。有几种策略可供选择。一种策略起源于业务架构学科，即定义与业务能力相对应的服务。另一种策略是围绕领域驱动设计的子域来组织服务。最终结果是围绕业务概念而不是技术概念组织的服务。

The second step in the process is to determine the decomposition into services. There are several strategies to choose from. One strategy, which has its origins in the discipline of business architecture, is to define services corresponding to business capabilities. Another strategy is to organize services around domain-driven design subdomains. The end result is services that are organized around business concepts rather than technical concepts.

定义应用程序架构的第三步是确定每个服务的 API。为此，您需要将第一步中识别的每个系统操作分配给一个服务。服务可能完全由自己实现某个操作。或者，它可能需要与其他服务协作。在这种情况下，您需要确定服务如何协作，这通常需要服务支持额外的操作。您还需要决定使用第 3 章中介绍的哪些 IPC 机制来实现每个服务的 API。

The third step in defining the application’s architecture is to determine each service’s API. To do that, you assign each system operation identified in the first step to a service. A service might implement an operation entirely by itself. Alternatively, it might need to collaborate with other services. In that case, you determine how the services collaborate, which typically requires services to support additional operations. You’ll also need to decide which of the IPC mechanisms I describe in chapter 3 to use to implement each service’s API.

分解有几个障碍。首先是网络延迟。您可能会发现，由于服务之间的往返次数过多，特定的分解并不切实际。分解的另一个障碍是服务之间的同步通信会降低可用性。您可能需要使用第 3 章中描述的自包含服务的概念。第三个障碍是需要跨服务维护数据一致性。您通常需要使用第 4 章中讨论的 saga。分解的第四个也是最后一个障碍是所谓的 god 类，它们在整个应用程序中使用。幸运的是，您可以使用领域驱动设计中的概念来消除 god 类。

There are several obstacles to decomposition. The first is network latency. You might discover that a particular decomposition would be impractical due to too many round-trips between services. Another obstacle to decomposition is that synchronous communication between services reduces availability. You might need to use the concept of self-contained services, described in chapter 3. The third obstacle is the requirement to maintain data consistency across services. You’ll typically need to use sagas, discussed in chapter 4. The fourth and final obstacle to decomposition is so-called god classes, which are used throughout an application. Fortunately, you can use concepts from domain-driven design to eliminate god classes.

本节首先介绍如何识别应用程序的操作。之后，我们将了解将应用程序分解为服务的策略和指南，以及分解的障碍以及如何解决它们。最后，我将描述如何定义每个服务的 API。

This section first describes how to identify an application’s operations. After that, we’ll look at strategies and guidelines for decomposing an application into services, and at obstacles to decomposition and how to address them. Finally, I’ll describe how to define each service’s API.

2.2.1. 识别系统操作

2.2.1. Identifying the system operations

定义应用程序架构的第一步是定义系统操作。起点是应用程序的需求，包括用户故事及其关联的用户场景（请注意，这些场景不同于架构场景）。系统操作使用图 2.6 中所示的两步过程进行识别和定义。此过程的灵感来自 Craig Larman 的《Applying UML and Patterns》（Prentice Hall，2004 年）一书中介绍的面向对象设计过程（有关详细信息，请参见 www.craiglarman.com/wiki/index.php?title=Book_Applying_UML_and_Patterns）。第一步创建高级域模型，该模型由提供描述系统操作的词汇表的关键类组成。第二步识别系统操作，并根据域模型描述每个操作的行为。

The first step in defining an application’s architecture is to define the system operations. The starting point is the application’s requirements, including user stories and their associated user scenarios (note that these are different from the architectural scenarios). The system operations are identified and defined using the two-step process shown in figure 2.6. This process is inspired by the object-oriented design process covered in Craig Larman’s book Applying UML and Patterns (Prentice Hall, 2004) (see www.craiglarman.com/wiki/index.php?title=Book_Applying_UML_and_Patterns for details). The first step creates the high-level domain model consisting of the key classes that provide a vocabulary with which to describe the system operations. The second step identifies the system operations and describes each one’s behavior in terms of the domain model.

图 2.6.系统操作是使用两步过程从应用程序的要求派生的。第一步是创建一个 高级域模型。第二步是定义系统操作,这些操作是根据域模型定义的。

领域模型主要源自用户故事中的名词，系统操作主要源自动词。您还可以使用一种称为 Event Storming 的技术来定义域模型，我将在第 5 章中讨论该技术。每个系统操作的行为都是根据它对一个或多个域对象及其之间关系的影响来描述的。系统操作可以创建、更新或删除域对象，以及创建或销毁它们之间的关系。

The domain model is derived primarily from the nouns of the user stories, and the system operations are derived mostly from the verbs. You could also define the domain model using a technique called Event Storming, which I talk about in chapter 5. The behavior of each system operation is described in terms of its effect on one or more domain objects and the relationships between them. A system operation can create, update, or delete domain objects, as well as create or destroy relationships between them.

让我们看看如何定义高级域模型。之后，我将根据域模型定义系统操作。

Let’s look at how to define a high-level domain model. After that I’ll define the system operations in terms of the domain model.

创建高级域模型

定义系统操作过程的第一步是为应用程序绘制高级域模型。请注意，此域模型比最终实现的模型简单得多。应用程序甚至不会只有一个域模型，因为正如您很快将了解到的，每个服务都有自己的域模型。尽管这是一种极端的简化，但高级域模型在此阶段很有用，因为它定义了用于描述系统操作行为的词汇表。

The first step in the process of defining the system operations is to sketch a high-level domain model for the application. Note that this domain model is much simpler than what will ultimately be implemented. The application won’t even have a single domain model because, as you’ll soon learn, each service has its own domain model. Despite being a drastic simplification, a high-level domain model is useful at this stage because it defines the vocabulary for describing the behavior of the system operations.

域模型是使用标准技术创建的，例如分析故事和场景中的名词以及与领域专家交谈。例如，考虑 Place Order 这个故事。我们可以将这个故事扩展为许多用户场景，包括下面这个：

A domain model is created using standard techniques such as analyzing the nouns in the stories and scenarios and talking to the domain experts. Consider, for example, the Place Order story. We can expand that story into numerous user scenarios including this one:

Given a consumer
  And a restaurant
  And a delivery address/time that can be served by that restaurant
  And an order total that meets the restaurant's order minimum
When the consumer places an order for the restaurant
Then consumer's credit card is authorized
  And an order is created in the PENDING_ACCEPTANCE state
  And the order is associated with the consumer
  And the order is associated with the restaurant

此用户场景中的名词暗示了各种类的存在，包括 Consumer、Order、Restaurant 和 CreditCard。

The nouns in this user scenario hint at the existence of various classes, including Consumer, Order, Restaurant, and CreditCard.

同样，Accept Order 这个故事可以扩展为这样的场景：

Similarly, the Accept Order story can be expanded into a scenario such as this one:

Given an order that is in the PENDING_ACCEPTANCE state
  and a courier that is available to deliver the order
When a restaurant accepts an order with a promise to prepare by a particular
     time
Then the state of the order is changed to ACCEPTED
  And the order's promiseByTime is updated to the promised time
  And the courier is assigned to deliver the order

此场景表明存在 Courier 和 Delivery 类。经过几次分析迭代后的最终结果将是一个域模型，不出所料，它由这些类和其他类（如 MenuItem 和 Address）组成。图 2.7 是显示这些关键类的类图。

This scenario suggests the existence of Courier and Delivery classes. The end result after a few iterations of analysis will be a domain model that consists, unsurprisingly, of those classes and others, such as MenuItem and Address. Figure 2.7 is a class diagram that shows the key classes.

图 2.7.FTGO 域模型中的关键类

每个类的职责如下：

The responsibilities of each class are as follows:

  • Consumer：下订单的消费者。
  • Consumer—A consumer who places orders.
  • Order：消费者下的订单。它描述订单并跟踪其状态。
  • Order—An order placed by a consumer. It describes the order and tracks its status.
  • OrderLineItem：Order 的一个行项目。
  • OrderLineItem—A line item of an Order.
  • DeliveryInfo：交付订单的时间和地点。
  • DeliveryInfo—The time and place to deliver an order.
  • Restaurant：为消费者准备订单以供配送的餐厅。
  • Restaurant—A restaurant that prepares orders for delivery to consumers.
  • MenuItem：餐厅菜单上的一个项目。
  • MenuItem—An item on the restaurant’s menu.
  • Courier：将订单配送给消费者的快递员。它跟踪快递员的可用性及其当前位置。
  • Courier—A courier who delivers orders to consumers. It tracks the availability of the courier and their current location.
  • Address：Consumer 或 Restaurant 的地址。
  • Address—The address of a Consumer or a Restaurant.
  • Location：Courier 的纬度和经度。
  • Location—The latitude and longitude of a Courier.
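
The responsibilities above can be sketched as plain Java classes. The fields shown are illustrative assumptions; the point is the vocabulary and the associations (an Order refers to both a Consumer and a Restaurant), not a finished design.

```java
// Sketch of a few key classes from the FTGO domain model in figure 2.7.
public class DomainModelSketch {

    public enum OrderState { PENDING_ACCEPTANCE, ACCEPTED }

    // A consumer who places orders
    public static class Consumer {
        public final String id;
        public Consumer(String id) { this.id = id; }
    }

    // A restaurant that prepares orders for delivery to consumers
    public static class Restaurant {
        public final String id;
        public Restaurant(String id) { this.id = id; }
    }

    // An order placed by a consumer: it tracks its status and is associated
    // with both a Consumer and a Restaurant
    public static class Order {
        public final Consumer consumer;
        public final Restaurant restaurant;
        public OrderState state = OrderState.PENDING_ACCEPTANCE;

        public Order(Consumer consumer, Restaurant restaurant) {
            this.consumer = consumer;
            this.restaurant = restaurant;
        }
    }

    // Mirrors the Place Order scenario: the new order starts in the
    // PENDING_ACCEPTANCE state, linked to the consumer and the restaurant
    public static Order placeOrder(Consumer consumer, Restaurant restaurant) {
        return new Order(consumer, restaurant);
    }
}
```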

像图 2.7 中这样的类图说明了应用程序架构的一个方面。但是，如果没有场景来赋予它生命，它只不过是一幅漂亮的图画。下一步是定义与架构场景相对应的系统操作。

A class diagram such as the one in figure 2.7 illustrates one aspect of an application’s architecture. But it isn’t much more than a pretty picture without the scenarios to animate it. The next step is to define the system operations, which correspond to architectural scenarios.

定义系统操作

定义高级域模型后，下一步是识别应用程序必须处理的请求。UI 的细节超出了本书的范围，但您可以想象，在每个用户场景中，UI 都会向后端业务逻辑发出请求以检索和更新数据。FTGO 主要是一个 Web 应用程序，这意味着大多数请求都是基于 HTTP 的，但某些客户端可能会使用消息传递。因此，与其承诺使用特定协议，不如使用更抽象的系统操作概念来表示请求。

Once you’ve defined a high-level domain model, the next step is to identify the requests that the application must handle. The details of the UI are beyond the scope of this book, but you can imagine that in each user scenario, the UI will make requests to the backend business logic to retrieve and update data. FTGO is primarily a web application, which means that most requests are HTTP-based, but it’s possible that some clients might use messaging. Instead of committing to a specific protocol, therefore, it makes sense to use the more abstract notion of a system operation to represent requests.

有两种类型的系统操作:

There are two types of system operations:

  • 命令 - 创建、更新和删除数据的系统操作
  • CommandsSystem operations that create, update, and delete data
  • 查询 - 读取(查询)数据的系统操作
  • QueriesSystem operations that read (query) data

最终，这些系统操作将对应于 REST、RPC 或消息传递端点，但现在抽象地考虑它们很有用。让我们首先确定一些命令。

Ultimately, these system operations will correspond to REST, RPC, or messaging endpoints, but for now thinking of them abstractly is useful. Let’s first identify some commands.

识别系统命令的一个很好的起点是分析用户故事和场景中的动词。例如，考虑 Place Order 故事。它清楚地表明系统必须提供 Create Order 操作。许多其他故事可以单独直接映射到系统命令。表 2.1 列出了一些关键的系统命令。

A good starting point for identifying system commands is to analyze the verbs in the user stories and scenarios. Consider, for example, the Place Order story. It clearly suggests that the system must provide a Create Order operation. Many other stories individually map directly to system commands. Table 2.1 lists some of the key system commands.

表 2.1.FTGO 应用程序的关键系统命令

Table 2.1. Key system commands for the FTGO application

Actor      | Story                  | Command                   | Description
Consumer   | Create order           | createOrder()             | Creates an order
Restaurant | Accept order           | acceptOrder()             | Indicates that the restaurant has accepted the order and is committed to preparing it by the indicated time
Restaurant | Order ready for pickup | noteOrderReadyForPickup() | Indicates that the order is ready to be picked up
Courier    | Update location        | noteUpdatedLocation()     | Updates the current location of the courier
Courier    | Delivery picked up     | noteDeliveryPickedUp()    | Indicates that the courier has picked up the order
Courier    | Delivery delivered     | noteDeliveryDelivered()   | Indicates that the courier has delivered the order

命令有一个规范，该规范根据域模型类定义其参数、返回值和行为。行为规范由调用操作时必须为 true 的前提条件，以及调用操作后为 true 的后置条件组成。例如，以下是 createOrder() 系统操作的规范：

A command has a specification that defines its parameters, return value, and behavior in terms of the domain model classes. The behavior specification consists of preconditions that must be true when the operation is invoked, and post-conditions that are true after the operation is invoked. Here, for example, is the specification of the createOrder() system operation:

操作 createOrder(消费者 ID、付款方式、送货地址、送货时间、餐厅 ID、订单行项目)
Operation createOrder(consumer id, payment method, delivery address, delivery time, restaurant id, order line items)
返回 orderId、...
Returns orderId, ...
前提条件
Preconditions
  • 消费者存在并且可以下订单。
  • The consumer exists and can place orders.
  • 行项目对应于餐厅的菜单项。
  • The line items correspond to the restaurant’s menu items.
  • 送货地址和时间可由餐厅提供服务。
  • The delivery address and time can be serviced by the restaurant.
后置条件
Post-conditions
  • 消费者的信用卡已获得订单总额的授权。
  • The consumer’s credit card was authorized for the order total.
  • 已创建处于 PENDING_ACCEPTANCE 状态的订单。
  • An order was created in the PENDING_ACCEPTANCE state.

前提条件对应于前面描述的 Place Order 用户场景中的 given。后置条件对应于场景中的 then。调用系统操作时，它将验证前提条件并执行使后置条件为 true 所需的操作。

The preconditions mirror the givens in the Place Order user scenario described earlier. The post-conditions mirror the thens from the scenario. When a system operation is invoked it will verify the preconditions and perform the actions required to make the post-conditions true.
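
One way to read the specification is as pseudocode: verify the preconditions (the scenario's givens), then perform the actions that establish the post-conditions (the thens). The sketch below does exactly that; its boolean parameters stand in for real checks against the domain model and are purely illustrative assumptions.

```java
// Sketch: translating the createOrder() specification into code.
public class CreateOrderSpecSketch {

    public static String createOrder(boolean consumerCanPlaceOrders,
                                     boolean lineItemsMatchMenu,
                                     boolean addressAndTimeServiceable) {
        // Verify the preconditions
        if (!consumerCanPlaceOrders || !lineItemsMatchMenu
                || !addressAndTimeServiceable) {
            throw new IllegalArgumentException(
                "createOrder() preconditions not satisfied");
        }
        // Establish the post-conditions: authorize the consumer's credit card
        // and create the order (both elided in this sketch), then report the
        // order's initial state
        return "PENDING_ACCEPTANCE";
    }
}
```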

以下是 acceptOrder() 系统操作的规范：

Here’s the specification of the acceptOrder() system operation:

操作 acceptOrder(restaurantId, orderId, readyByTime)
Operation acceptOrder(restaurantId, orderId, readyByTime)
返回
Returns
前提条件
Preconditions
  • order.status 为 PENDING_ACCEPTANCE。
  • The order.status is PENDING_ACCEPTANCE.
  • 快递员可以交付订单。
  • A courier is available to deliver the order.
后置条件
Post-conditions
  • order.status 已更改为 ACCEPTED。
  • The order.status was changed to ACCEPTED.
  • order.readyByTime 已更改为 readyByTime。
  • The order.readyByTime was changed to the readyByTime.
  • 快递员被指派递送订单。
  • The courier was assigned to deliver the order.

其前置条件和后置条件反映了前面的用户方案。

Its pre- and post-conditions mirror the user scenario from earlier.

大多数与架构相关的系统操作都是命令。但是，有时检索数据的查询也很重要。

Most of the architecturally relevant system operations are commands. Sometimes, though, queries, which retrieve data, are also important.

除了实现命令外，应用程序还必须实现查询。查询为 UI 提供用户做出决策所需的信息。在这个阶段，我们还没有为 FTGO 应用程序设想特定的 UI 设计，但请考虑消费者下订单时的流程，例如：

Besides implementing commands, an application must also implement queries. The queries provide the UI with the information a user needs to make decisions. At this stage, we don’t have a particular UI design for FTGO application in mind, but consider, for example, the flow when a consumer places an order:

  1. 用户输入送货地址和时间。
  1. User enters delivery address and time.
  2. 系统显示可用的餐厅。
  2. System displays available restaurants.
  3. 用户选择餐厅。
  3. User selects restaurant.
  4. 系统显示菜单。
  4. System displays menu.
  5. 用户选择项目并结账。
  5. User selects item and checks out.
  6. 系统创建订单。
  6. System creates order.

此用户方案建议以下查询:

This user scenario suggests the following queries:

  • findAvailableRestaurants(deliveryAddress, deliveryTime)— 检索可以在指定时间送货到指定送货地址的餐厅
  • findAvailableRestaurants(deliveryAddress, deliveryTime)Retrieves the restaurants that can deliver to the specified delivery address at the specified time
  • findRestaurantMenu(id)- 检索有关餐厅的信息,包括菜单项
  • findRestaurantMenu(id)Retrieves information about a restaurant including the menu items

在这两个查询中，findAvailableRestaurants() 可能是架构上最重要的。这是一个涉及地理搜索的复杂查询。该查询的地理搜索部分包括查找某个位置（送货地址）附近的所有点（餐厅）。它还会过滤掉在订单需要准备和取货时已打烊的餐厅。此外，性能至关重要，因为每当消费者想要下订单时都会执行此查询。

Of the two queries, findAvailableRestaurants() is probably the most architecturally significant. It’s a complex query involving geosearch. The geosearch component of the query consists of finding all points—restaurants—that are near a location—the delivery address. It also filters out those restaurants that are closed when the order needs to be prepared and picked up. Moreover, performance is critical, because this query is executed whenever a consumer wants to place an order.
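
A drastically simplified sketch of such a query appears below: it keeps restaurants that are both near the delivery address and open at the delivery time. The planar distance check and the data model are simplifying assumptions made for illustration; a production implementation would use a geospatial index.

```java
// Illustrative sketch of the findAvailableRestaurants() query.
import java.util.ArrayList;
import java.util.List;

public class GeosearchSketch {

    public static class Restaurant {
        final String name;
        final double lat, lon;
        final int opensHour, closesHour; // 24-hour clock

        public Restaurant(String name, double lat, double lon,
                          int opensHour, int closesHour) {
            this.name = name;
            this.lat = lat;
            this.lon = lon;
            this.opensHour = opensHour;
            this.closesHour = closesHour;
        }
    }

    public static List<String> findAvailableRestaurants(
            List<Restaurant> restaurants,
            double deliveryLat, double deliveryLon,
            double maxDistanceDegrees, int deliveryHour) {
        List<String> available = new ArrayList<>();
        for (Restaurant r : restaurants) {
            double dLat = r.lat - deliveryLat;
            double dLon = r.lon - deliveryLon;
            boolean near = Math.hypot(dLat, dLon) <= maxDistanceDegrees;
            boolean open = deliveryHour >= r.opensHour
                        && deliveryHour < r.closesHour;
            if (near && open) {
                available.add(r.name);
            }
        }
        return available;
    }

    // Small demonstration with made-up restaurants and a noon delivery
    public static List<String> demo() {
        List<Restaurant> all = List.of(
            new Restaurant("Ajanta", 37.00, -122.00, 11, 22),
            new Restaurant("TooFar", 40.00, -100.00, 11, 22),
            new Restaurant("ClosedAtNoon", 37.00, -122.00, 17, 22));
        return findAvailableRestaurants(all, 37.01, -122.01, 0.1, 12);
    }
}
```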

高级域模型和系统操作捕获了应用程序的功能。它们有助于推动应用程序架构的定义。每个系统操作的行为都根据域模型进行描述。每个重要的系统操作都代表一个架构上重要的场景，是架构描述的一部分。

The high-level domain model and the system operations capture what the application does. They help drive the definition of the application’s architecture. The behavior of each system operation is described in terms of the domain model. Each important system operation represents an architecturally significant scenario that’s part of the description of the architecture.

定义系统操作后，下一步是识别应用程序的服务。如前所述，没有一个可以机械遵循的过程。但是，您可以使用各种分解策略。每种策略都从不同的角度解决问题，并使用自己的术语。但对于所有策略，最终结果都是相同的：由主要围绕业务概念而不是技术概念组织的服务组成的架构。

Once the system operations have been defined, the next step is to identify the application’s services. As mentioned earlier, there isn’t a mechanical process to follow. There are, however, various decomposition strategies that you can use. Each one attacks the problem from a different perspective and uses its own terminology. But with all strategies, the end result is the same: an architecture consisting of services that are primarily organized around business rather than technical concepts.

让我们看看第一个策略,它定义了与业务能力相对应的服务。

Let’s look at the first strategy, which defines services corresponding to business capabilities.

2.2.2. 通过应用 Decompose by business 功能模式来定义服务

2.2.2. Defining services by applying the Decompose by business capability pattern

创建微服务架构的一种策略是按业务能力进行分解。业务能力是来自业务架构建模的概念，是企业为了产生价值而做的事情。给定业务的能力集取决于业务的类型。例如，保险公司的能力通常包括承保、索赔管理、计费、合规性等。在线商店的能力包括订单管理、库存管理、运输等。

One strategy for creating a microservice architecture is to decompose by business capability. A concept from business architecture modeling, a business capability is something that a business does in order to generate value. The set of capabilities for a given business depends on the kind of business. For example, the capabilities of an insurance company typically include Underwriting, Claims management, Billing, Compliance, and so on. The capabilities of an online store include Order management, Inventory management, Shipping, and so on.

模式:按业务能力分解

定义与业务能力相对应的服务。请参阅 http://microservices.io/patterns/decomposition/decompose-by-business-capability.html

Define services corresponding to business capabilities. See http://microservices.io/patterns/decomposition/decompose-by-business-capability.html.

业务能力定义组织的工作

组织的业务能力捕获了组织的业务是什么。它们通常是稳定的，这与组织开展业务的方式相反，后者会随着时间的推移而变化，有时甚至会发生巨大变化。如今尤其如此，因为越来越多的技术被用来实现许多业务流程的自动化。例如，不久之前，您还是通过把支票交给出纳员来在银行存入支票。后来可以使用 ATM 存入支票。如今，您可以方便地使用智能手机存入大多数支票。如您所见，存入支票这一业务能力保持稳定，但其实现方式已经发生了翻天覆地的变化。

An organization’s business capabilities capture what an organization’s business is. They’re generally stable, as opposed to how an organization conducts its business, which changes over time, sometimes dramatically. That’s especially true today, with the rapidly growing use of technology to automate many business processes. For example, it wasn’t that long ago that you deposited checks at your bank by handing them to a teller. It then became possible to deposit checks using an ATM. Today you can conveniently deposit most checks using your smartphone. As you can see, the Deposit check business capability has remained stable, but the manner in which it’s done has drastically changed.

识别业务能力

通过分析组织的目标、结构和业务流程来识别组织的业务能力。每个业务能力都可以被视为一项服务，只不过它是面向业务的，而不是面向技术的。其规范由各种组件组成，包括输入、输出和服务级别协议。例如，保险承保能力的输入是消费者的申请，输出包括批准结果和价格。

An organization’s business capabilities are identified by analyzing the organization’s purpose, structure, and business processes. Each business capability can be thought of as a service, except it’s business-oriented rather than technical. Its specification consists of various components, including inputs, outputs, and service-level agreements. For example, the input to an Insurance underwriting capability is the consumer’s application, and the outputs include approval and price.

业务功能通常侧重于特定的业务对象。例如,Claim 业务对象是焦点 的索赔管理功能。功能通常可以分解为子功能。例如,索赔管理 功能具有多个子功能,包括 Claim information management、Claim review 和 Claim payment management。

A business capability is often focused on a particular business object. For example, the Claim business object is the focus of the Claim management capability. A capability can often be decomposed into sub-capabilities. For example, the Claim management capability has several sub-capabilities, including Claim information management, Claim review, and Claim payment management.

不难想象,FTGO 的业务能力包括以下内容:

It is not difficult to imagine that the business capabilities for FTGO include the following:

  • 供应商管理

    • 快递员管理管理快递员信息
    • 餐厅信息管理管理餐厅菜单和其他信息,包括位置和营业时间
  • Supplier management

    • Courier managementManaging courier information
    • Restaurant information managementManaging restaurant menus and other information, including location and open hours
  • 消费者管理 - 管理有关消费者的信息
  • Consumer management—Managing information about consumers
  • 接单和履行

    • 订单管理使消费者能够创建和管理订单
    • 餐厅订单管理管理餐厅的订单准备工作
    • 后勤
    • 快递员可用性管理管理快递员对交货订单的实时可用性
    • 配送管理将订单交付给消费者
  • Order taking and fulfillment

    • Order managementEnabling consumers to create and manage orders
    • Restaurant order managementManaging the preparation of orders at a restaurant
    • Logistics
    • Courier availability managementManaging the real-time availability of couriers to delivery orders
    • Delivery managementDelivering orders to consumers
  • 会计

    • 消费者会计管理消费者的计费
    • 餐厅会计管理向餐厅付款
    • 快递会计管理向快递员支付的款项
  • Accounting

    • Consumer accountingManaging billing of consumers
    • Restaurant accountingManaging payments to restaurants
    • Courier accountingManaging payments to couriers
  • ...
  • ...

顶级能力包括供应商管理、消费者管理、接单和履行以及会计。可能还会有许多其他顶级能力，包括与营销相关的能力。大多数顶级能力都会被分解为子能力。例如，接单和履行被分解为五个子能力。

The top-level capabilities include Supplier management, Consumer management, Order taking and fulfillment, and Accounting. There will likely be many other top-level capabilities, including marketing-related capabilities. Most top-level capabilities are decomposed into sub-capabilities. For example, Order taking and fulfillment is decomposed into five sub-capabilities.

这个能力层次结构的一个有趣之处在于，有三个与餐厅相关的能力：餐厅信息管理、餐厅订单管理和餐厅会计。这是因为它们代表了餐厅运营的三个截然不同的方面。

One interesting aspect of this capability hierarchy is that there are three restaurant-related capabilities: Restaurant information management, Restaurant order management, and Restaurant accounting. That’s because they represent three very different aspects of restaurant operations.

接下来,我们将了解如何使用业务功能来定义服务。

Next we’ll look at how to use business capabilities to define services.

从业务功能到服务

确定业务功能后,您可以为每个功能或相关功能组定义一个服务。图 2.8 显示了 FTGO 应用程序从功能到服务的映射。一些顶级功能,例如 Accounting 功能映射到服务。在其他情况下,子功能将映射到服务。

Once you’ve identified the business capabilities, you then define a service for each capability or group of related capabilities. Figure 2.8 shows the mapping from capabilities to services for the FTGO application. Some top-level capabilities, such as the Accounting capability, are mapped to services. In other cases, sub-capabilities are mapped to services.

图 2.8.将 FTGO 业务能力映射到服务。功能层次结构的各个级别的功能映射到服务。

将能力层次结构的哪个级别映射到服务，这一决定在某种程度上是主观的。我对此特定映射的理由如下：

The decision of which level of the capability hierarchy to map to services is somewhat subjective. My justification for this particular mapping is as follows:

  • 我将供应商管理的子能力映射到两个服务，因为餐厅和快递员是非常不同的供应商类型。
  • I mapped the sub-capabilities of Supplier management to two services, because Restaurants and Couriers are very different types of suppliers.
  • 我将接单和履行能力映射到三个服务，每个服务负责流程的不同阶段。我将快递员可用性管理和配送管理能力合并并映射到单个服务，因为它们深深地交织在一起。
  • I mapped the Order taking and fulfillment capability to three services that are each responsible for different phases of the process. I combined the Courier availability management and Delivery management capabilities and mapped them to a single service because they’re deeply intertwined.
  • 我将 Accounting 功能映射到它自己的服务,因为不同类型的会计似乎很相似。
  • I mapped the Accounting capability to its own service, because the different types of accounting seem similar.

稍后，将（面向餐厅和快递员的）付款与（面向消费者的）账单分开可能是有意义的。

Later on, it may make sense to separate payments (of Restaurants and Couriers) and billing (of Consumers).

围绕能力组织服务的一个关键好处是，由于能力是稳定的，因此生成的架构也将相对稳定。架构的各个组件可能会随着业务"如何做"这一方面的变化而演进，但整体架构保持不变。

A key benefit of organizing services around capabilities is that because they’re stable, the resulting architecture will also be relatively stable. The individual components of the architecture may evolve as the how aspect of the business changes, but the architecture remains unchanged.

话虽如此，重要的是要记住，图 2.8 中所示的服务只是定义架构的第一次尝试。随着我们对应用程序领域的了解越来越多，它们可能会随着时间的推移而演进。具体而言，架构定义过程中的一个重要步骤是调查服务在每个关键架构场景中如何协作。例如，您可能会发现特定的分解由于进程间通信过多而效率低下，因此您必须合并服务。相反，服务的复杂性也可能增长到值得将其拆分为多个服务的程度。此外，在 2.2.5 节中，我描述了可能导致您重新审视决策的几个分解障碍。

Having said that, it’s important to remember that the services shown in figure 2.8 are merely the first attempt at defining the architecture. They may evolve over time as we learn more about the application domain. In particular, an important step in the architecture definition process is investigating how the services collaborate in each of the key architectural scenarios. You might, for example, discover that a particular decomposition is inefficient due to excessive interprocess communication and that you must combine services. Conversely, a service might grow in complexity to the point where it becomes worthwhile to split it into multiple services. What’s more, in section 2.2.5, I describe several obstacles to decomposition that might cause you to revisit your decision.

让我们看一下另一种基于领域驱动设计来分解应用程序的方法。

Let’s take a look at another way to decompose an application that is based on domain-driven design.

2.2.3. 通过应用 Decompose by sub-domain 模式来定义服务

2.2.3. Defining services by applying the Decompose by sub-domain pattern

正如 Eric Evans 的优秀著作《Domain-Driven Design》（Addison-Wesley Professional，2003 年）中所述，DDD 是一种用于构建复杂软件应用程序的方法，其核心是开发面向对象的域模型。域模型以可用于解决该域中问题的形式捕获有关该域的知识。它定义了团队使用的词汇表，DDD 称之为通用语言（Ubiquitous Language）。域模型密切反映在应用程序的设计和实现中。DDD 有两个在应用微服务架构时非常有用的概念：子域和限界上下文。

DDD, as described in the excellent book Domain-driven design by Eric Evans (Addison-Wesley Professional, 2003), is an approach for building complex software applications that is centered on the development of an object-oriented domain model. A domain model captures knowledge about a domain in a form that can be used to solve problems within that domain. It defines the vocabulary used by the team, what DDD calls the Ubiquitous Language. The domain model is closely mirrored in the design and implementation of the application. DDD has two concepts that are incredibly useful when applying the microservice architecture: subdomains and bounded contexts.

模式:按子域分解

定义与 DDD 子域对应的服务。请参阅 http://microservices.io/patterns/decomposition/decompose-by-subdomain.html

Define services corresponding to DDD subdomains. See http://microservices.io/patterns/decomposition/decompose-by-subdomain.html.

DDD 与传统的企业建模方法完全不同，后者为整个企业创建单个模型。例如，在这样的模型中，每个业务实体（如客户、订单等）都只有一个定义。这种建模的问题在于，让组织的不同部分就单个模型达成一致是一项艰巨的任务。此外，这意味着从组织特定部分的角度来看，该模型对于他们的需求而言过于复杂。而且，域模型可能会令人困惑，因为组织的不同部分可能会用同一术语表示不同的概念，或用不同的术语表示同一概念。DDD 通过定义多个域模型（每个模型都有明确的范围）来避免这些问题。

DDD is quite different than the traditional approach to enterprise modeling, which creates a single model for the entire enterprise. In such a model there would be, for example, a single definition of each business entity, such as customer, order, and so on. The problem with this kind of modeling is that getting different parts of an organization to agree on a single model is a monumental task. Also, it means that from the perspective of a given part of the organization, the model is overly complex for their needs. Moreover, the domain model can be confusing because different parts of the organization might use either the same term for different concepts or different terms for the same concept. DDD avoids these problems by defining multiple domain models, each with an explicit scope.

DDD defines a separate domain model for each subdomain. A subdomain is a part of the domain, DDD’s term for the application’s problem space. Subdomains are identified using the same approach as identifying business capabilities: analyze the business and identify the different areas of expertise. The end result is very likely to be subdomains that are similar to the business capabilities. Examples of subdomains in FTGO include Order taking, Order management, Kitchen management, Delivery, and Financials. As you can see, these subdomains are very similar to the business capabilities described earlier.

DDD calls the scope of a domain model a bounded context. A bounded context includes the code artifacts that implement the model. When using the microservice architecture, each bounded context is a service or possibly a set of services. We can create a microservice architecture by applying DDD and defining a service for each subdomain. Figure 2.9 shows how the subdomains map to services, each with its own domain model.

Figure 2.9. From subdomains to services: each subdomain of the FTGO application’s domain is mapped to a service, which has its own domain model.

DDD and the microservice architecture are in almost perfect alignment. The DDD concepts of subdomains and bounded contexts map nicely to services within a microservice architecture. Also, the microservice architecture’s concept of autonomous teams owning services is completely aligned with DDD’s concept of each domain model being owned and developed by a single team. Even better, as I describe later in this section, the concept of a subdomain with its own domain model is a great way to eliminate god classes and thereby make decomposition easier.

Decompose by subdomain and Decompose by business capability are the two main patterns for defining an application’s microservice architecture. There are, however, some useful guidelines for decomposition that have their roots in object-oriented design. Let’s take a look at them.

2.2.4. Decomposition guidelines

So far in this chapter, we’ve looked at the main ways to define a microservice architecture. We can also adapt and use a couple of principles from object-oriented design when applying the microservice architecture pattern. These principles were created by Robert C. Martin and described in his classic book Designing Object-Oriented C++ Applications Using the Booch Method (Prentice Hall, 1995). The first principle is the Single Responsibility Principle (SRP), for defining the responsibilities of a class. The second principle is the Common Closure Principle (CCP), for organizing classes into packages. Let’s take a look at these principles and see how they can be applied to the microservice architecture.

Single Responsibility Principle

One of the main goals of software architecture and design is determining the responsibilities of each software element. The Single Responsibility Principle is as follows:

A class should have only one reason to change.

Robert C. Martin

Each responsibility that a class has is a potential reason for that class to change. If a class has multiple responsibilities that change independently, the class won’t be stable. By following the SRP, you define classes that each have a single responsibility and hence a single reason for change.
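As a minimal sketch of the SRP (with hypothetical class and method names, not taken from the FTGO code), the two classes below each have exactly one responsibility, and therefore exactly one reason to change:

```python
# Hypothetical sketch of the SRP: pricing logic and persistence are separate
# responsibilities, so each lives in its own class with one reason to change.

class OrderPricer:
    """Responsible only for computing an order's total."""

    def total(self, line_items):
        # line_items is a list of (quantity, unit_price) pairs
        return sum(qty * price for qty, price in line_items)


class OrderRepository:
    """Responsible only for storing and retrieving orders."""

    def __init__(self):
        self._store = {}

    def save(self, order_id, order):
        self._store[order_id] = order

    def find(self, order_id):
        return self._store.get(order_id)
```

Merging these into one class would give it two independent reasons to change, which is exactly what the SRP warns against.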

We can apply SRP when defining a microservice architecture and create small, cohesive services that each have a single responsibility. This will reduce the size of the services and increase their stability. The new FTGO architecture is an example of SRP in action. Each aspect of getting food to a consumer—order taking, order preparation, and delivery—is the responsibility of a separate service.

Common Closure Principle

The other useful principle is the Common Closure Principle:

The classes in a package should be closed together against the same kinds of changes. A change that affects a package affects all the classes in that package.

Robert C. Martin

The idea is that if two classes change in lockstep because of the same underlying reason, then they belong in the same package. Perhaps, for example, those classes implement different aspects of a particular business rule. The goal is that when that business rule changes, developers only need to change code in a small number of packages (ideally only one). Adhering to the CCP significantly improves the maintainability of an application.

We can apply CCP when creating a microservice architecture and package components that change for the same reason into the same service. Doing this will minimize the number of services that need to be changed and deployed when some requirement changes. Ideally, a change will only affect a single team and a single service. CCP is the antidote to the distributed monolith anti-pattern.

SRP and CCP are two of the 11 principles developed by Bob Martin. They’re particularly useful when developing a microservice architecture. The remaining nine principles are used when designing classes and packages. For more information about SRP, CCP, and the other OOD principles, see the article “The Principles of Object Oriented Design” on Bob Martin’s website (http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod).

Decomposition by business capability and by subdomain along with SRP and CCP are good techniques for decomposing an application into services. In order to apply them and successfully develop a microservice architecture, you must solve some transaction management and interprocess communication issues.

2.2.5. Obstacles to decomposing an application into services

On the surface, the strategy of creating a microservice architecture by defining services corresponding to business capabilities or subdomains looks straightforward. You may, however, encounter several obstacles:

  • Network latency
  • Reduced availability due to synchronous communication
  • Maintaining data consistency across services
  • Obtaining a consistent view of the data
  • God classes preventing decomposition

Let’s take a look at each obstacle, starting with network latency.

Network latency

Network latency is an ever-present concern in a distributed system. You might discover that a particular decomposition into services results in a large number of round-trips between two services. Sometimes, you can reduce the latency to an acceptable amount by implementing a batch API for fetching multiple objects in a single round trip. But in other situations, the solution is to combine services, replacing expensive IPC with language-level method or function calls.
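The round-trip arithmetic can be sketched as follows; the client class and its methods are hypothetical illustrations, not an API from the book:

```python
# Hypothetical sketch: fetching N orders individually costs N round trips,
# while a batch endpoint fetches them all in a single round trip.

class OrderClient:
    def __init__(self, remote_orders):
        self._remote = remote_orders  # stands in for the remote service's data
        self.round_trips = 0          # counts simulated network calls

    def get_order(self, order_id):
        self.round_trips += 1         # one round trip per order
        return self._remote[order_id]

    def get_orders(self, order_ids):
        self.round_trips += 1         # one round trip for the whole batch
        return [self._remote[oid] for oid in order_ids]
```

With per-object fetches, latency grows linearly with the number of objects; the batch API pays the network cost once.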

Synchronous interprocess communication reduces availability

Another problem is how to implement interservice communication in a way that doesn’t reduce availability. For example, the most straightforward way to implement the createOrder() operation is for the Order Service to synchronously invoke the other services using REST. The drawback of using a protocol like REST is that it reduces the availability of the Order Service. It won’t be able to create an order if any of those other services are unavailable. Sometimes this is a worthwhile trade-off, but in chapter 3 you’ll learn that using asynchronous messaging, which eliminates tight coupling and improves availability, is often a better choice.
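The availability trade-off can be sketched as a toy model (not the book’s implementation): the synchronous variant fails outright when a collaborator is down, while the asynchronous variant accepts the request and queues a message for later delivery.

```python
# Toy model of the availability trade-off between synchronous and
# asynchronous collaboration when creating an order.

class ServiceUnavailable(Exception):
    pass

def create_order_sync(consumer_service_up):
    # Synchronous style: the Order Service can't proceed if a
    # collaborator is down.
    if not consumer_service_up:
        raise ServiceUnavailable("Consumer Service is down")
    return "CREATED"

def create_order_async(outbox):
    # Asynchronous style: accept the order now; the message is delivered
    # once the consuming service is available again.
    outbox.append({"type": "OrderCreated"})
    return "PENDING"
```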

Maintaining data consistency across services

Another challenge is maintaining data consistency across services. Some system operations need to update data in multiple services. For example, when a restaurant accepts an order, updates must occur in both the Kitchen Service and the Delivery Service. The Kitchen Service changes the status of the Ticket. The Delivery Service schedules delivery of the order. Both of these updates must be done atomically.

The traditional solution is to use a two-phase commit-based, distributed transaction management mechanism. But as you’ll see in chapter 4, this is not a good choice for modern applications, and you must use a very different approach to transaction management, a saga. A saga is a sequence of local transactions that are coordinated using messaging. Sagas are more complex than traditional ACID transactions but they work well in many situations. One limitation of sagas is that they are eventually consistent. If you need to update some data atomically, then it must reside within a single service, which can be an obstacle to decomposition.
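The shape of a saga can be sketched as follows; this is a bare illustration of “local transactions plus compensations,” not the messaging-based implementation that chapter 4 develops:

```python
# Minimal sketch of a saga: run local transactions in order; if one fails,
# run the compensating transactions of the completed steps in reverse.

def run_saga(steps):
    """steps: list of (transaction, compensation) callables."""
    completed = []
    for transaction, compensation in steps:
        try:
            transaction()
        except Exception:
            # Undo already-completed steps in reverse order.
            for undo in reversed(completed):
                undo()
            return "ROLLED_BACK"
        completed.append(compensation)
    return "COMPLETED"
```

Note that between a step and its eventual compensation the system is visibly in an intermediate state, which is why sagas are only eventually consistent.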

Obtaining a consistent view of the data

Another obstacle to decomposition is the inability to obtain a truly consistent view of data across multiple databases. In a monolithic application, the properties of ACID transactions guarantee that a query will return a consistent view of the database. In contrast, in a microservice architecture, even though each service’s database is consistent, you can’t obtain a globally consistent view of the data. If you need a consistent view of some data, then it must reside in a single service, which can prevent decomposition. Fortunately, in practice this is rarely a problem.

God classes preventing decomposition

Another obstacle to decomposition is the existence of so-called god classes. God classes are the bloated classes that are used throughout an application (http://wiki.c2.com/?GodClass). A god class typically implements business logic for many different aspects of the application. It normally has a large number of fields mapped to a database table with many columns. Most applications have at least one of these classes, each representing a concept that’s central to the domain: accounts in banking, orders in e-commerce, policies in insurance, and so on. Because a god class bundles together state and behavior for many different aspects of an application, it’s an insurmountable obstacle to splitting any business logic that uses it into services.

The Order class is a great example of a god class in the FTGO application. That’s not surprising—after all, the purpose of FTGO is to deliver food orders to customers. Most parts of the system involve orders. If the FTGO application had a single domain model, the Order class would be a very large class. It would have state and behavior corresponding to many different parts of the application. Figure 2.10 shows the structure of this class that would be created using traditional modeling techniques.

Figure 2.10. The Order god class is bloated with numerous responsibilities.

As you can see, the Order class has fields and methods corresponding to order processing, restaurant order management, delivery, and payments. This class also has a complex state model, due to the fact that one model has to describe state transitions from disparate parts of the application. In its current form, this class makes it extremely difficult to split code into services.

One solution is to package the Order class into a library and create a central Order database. All services that process orders use this library and access the Order database. The trouble with this approach is that it violates one of the key principles of the microservice architecture and results in undesirable, tight coupling. For example, any change to the Order schema requires the teams to update their code in lockstep.

Another solution is to encapsulate the Order database in an Order Service, which is invoked by the other services to retrieve and update orders. The problem with that design is that the Order Service would be a data service with an anemic domain model containing little or no business logic. Neither of these options is appealing, but fortunately, DDD provides a solution.

A much better approach is to apply DDD and treat each service as a separate subdomain with its own domain model. This means that each of the services in the FTGO application that has anything to do with orders has its own domain model with its version of the Order class. A great example of the benefit of multiple domain models is the Delivery Service. Its view of an Order, shown in figure 2.11, is extremely simple: pickup address, pickup time, delivery address, and delivery time. Moreover, rather than call it an Order, the Delivery Service uses the more appropriate name of Delivery.

Figure 2.11. The Delivery Service’s domain model

The Delivery Service isn’t interested in any of the other attributes of an order.

The Kitchen Service also has a much simpler view of an order. Its version of an Order is called a Ticket. As figure 2.12 shows, a Ticket simply consists of a status, the requestedDeliveryTime, a prepareByTime, and a list of line items that tell the restaurant what to prepare. It’s unconcerned with the consumer, payment, delivery, and so on.

Figure 2.12. The Kitchen Service’s domain model
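The two simplified views described above can be sketched as plain data classes. The field names follow the text; the types and structure are illustrative assumptions:

```python
# Sketch of two bounded contexts' views of the same order: the Delivery
# Service calls its view a Delivery; the Kitchen Service calls its a Ticket.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Delivery:
    # The Delivery Service's view: just where and when to pick up and drop off.
    pickup_address: str
    pickup_time: str
    delivery_address: str
    delivery_time: str

@dataclass
class Ticket:
    # The Kitchen Service's view: what to prepare, and by when.
    status: str
    requested_delivery_time: str
    prepare_by_time: str
    line_items: List[str] = field(default_factory=list)
```

Neither class carries consumer, payment, or restaurant-management state; each bounded context keeps only what it needs.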

The Order Service has the most complex view of an order, shown in figure 2.13. Even though it has quite a few fields and methods, it’s still much simpler than the original version.

Figure 2.13. The Order Service’s domain model

The Order class in each domain model represents different aspects of the same Order business entity. The FTGO application must maintain consistency between these different objects in different services. For example, once the Order Service has authorized the consumer’s credit card, it must trigger the creation of the Ticket in the Kitchen Service. Similarly, if the restaurant rejects the order via the Kitchen Service, it must be cancelled in the Order Service, and the customer credited in the billing service. In chapter 4, you’ll learn how to maintain consistency between services using the previously mentioned event-driven mechanism, sagas.

As well as creating technical challenges, having multiple domain models also impacts the implementation of the user experience. An application must translate between the user experience, which is its own domain model, and the domain models of each of the services. In the FTGO application, for example, the Order status displayed to a consumer is derived from Order information stored in multiple services. This translation is often handled by the API gateway, discussed in chapter 8. Despite these challenges, it’s essential that you identify and eliminate god classes when defining a microservice architecture.

We’ll now look at how to define the service APIs.

2.2.6. Defining service APIs

So far, we have a list of system operations and a list of potential services. The next step is to define each service’s API: its operations and events. A service API operation exists for one of two reasons: some operations correspond to system operations. They are invoked by external clients and perhaps by other services. The other operations exist to support collaboration between services. These operations are only invoked by other services.

A service publishes events primarily to enable it to collaborate with other services. Chapter 4 describes how events can be used to implement sagas, which maintain data consistency across services. And chapter 7 discusses how events can be used to update CQRS views, which support efficient querying. An application can also use events to notify external clients. For example, it could use WebSockets to deliver events to a browser.

The starting point for defining the service APIs is to map each system operation to a service. After that, we decide whether a service needs to collaborate with others to implement a system operation. If collaboration is required, we then determine what APIs those other services must provide in order to support the collaboration. Let’s begin by looking at how to assign system operations to services.

Assigning system operations to services

The first step is to decide which service is the initial entry point for a request. Many system operations neatly map to a service, but sometimes the mapping is less obvious. Consider, for example, the noteUpdatedLocation() operation, which updates the courier location. On one hand, because it’s related to couriers, this operation should be assigned to the Courier service. On the other hand, it’s the Delivery Service that needs the courier location. In this case, assigning an operation to a service that needs the information provided by the operation is a better choice. In other situations, it might make sense to assign an operation to the service that has the information necessary to handle it.

Table 2.2 shows which services in the FTGO application are responsible for which operations.

Table 2.2. Mapping system operations to services in the FTGO application

  • Consumer Service: createConsumer()
  • Order Service: createOrder()
  • Restaurant Service: findAvailableRestaurants()
  • Kitchen Service: acceptOrder(), noteOrderReadyForPickup()
  • Delivery Service: noteUpdatedLocation(), noteDeliveryPickedUp(), noteDeliveryDelivered()

After having assigned operations to services, the next step is to decide how the services collaborate in order to handle each system operation.

Determining the APIs required to support collaboration between services

Some system operations are handled entirely by a single service. For example, in the FTGO application, the Consumer Service handles the createConsumer() operation entirely by itself. But other system operations span multiple services. The data needed to handle one of these requests might, for instance, be scattered around multiple services. For example, in order to implement the createOrder() operation, the Order Service must invoke the following services in order to verify its preconditions and make the post-conditions become true:

  • Consumer Service: Verify that the consumer can place an order and obtain their payment information.
  • Restaurant Service: Validate the order line items, verify that the delivery address/time is within the restaurant’s service area, verify that the order minimum is met, and obtain prices for the order line items.
  • Kitchen Service: Create the Ticket.
  • Accounting Service: Authorize the consumer’s credit card.
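The collaboration above can be sketched as a simple orchestration function. The service objects and method names are illustrative stand-ins, not the book’s actual interfaces:

```python
# Sketch of the createOrder() collaboration: each collaborator verifies a
# precondition or performs its part before the order is created.

def create_order(consumer_svc, restaurant_svc, kitchen_svc, accounting_svc, request):
    # Verify the consumer may place an order, and obtain payment info.
    payment_info = consumer_svc.verify_consumer_details(request["consumer_id"])
    # Validate the line items and obtain their prices.
    prices = restaurant_svc.verify_order_details(request["line_items"])
    # Create the Ticket in the Kitchen Service.
    ticket_id = kitchen_svc.create_ticket(request["line_items"])
    # Authorize the consumer's credit card for the order total.
    accounting_svc.authorize_card(payment_info, sum(prices))
    return {"ticket_id": ticket_id, "total": sum(prices)}
```

In this form the calls are synchronous; chapter 4 shows how the same collaboration is implemented as a saga using asynchronous messaging.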

Similarly, in order to implement the acceptOrder() system operation, the Kitchen Service must invoke the Delivery Service to schedule a courier to deliver the order. Table 2.3 shows the services, their revised APIs, and their collaborators. In order to fully define the service APIs, you need to analyze each system operation and determine what collaboration is required.

Table 2.3. The services, their revised APIs, and their collaborators

  • Consumer Service
    • Operations: verifyConsumerDetails()
  • Order Service
    • Operations: createOrder()
    • Collaborators: Consumer Service verifyConsumerDetails(); Restaurant Service verifyOrderDetails(); Kitchen Service createTicket(); Accounting Service authorizeCard()
  • Restaurant Service
    • Operations: findAvailableRestaurants(), verifyOrderDetails()
  • Kitchen Service
    • Operations: createTicket(), acceptOrder(), noteOrderReadyForPickup()
    • Collaborators: Delivery Service scheduleDelivery()
  • Delivery Service
    • Operations: scheduleDelivery(), noteUpdatedLocation(), noteDeliveryPickedUp(), noteDeliveryDelivered()
  • Accounting Service
    • Operations: authorizeCard()

So far, we’ve identified the services and the operations that each service implements. But it’s important to remember that the architecture we’ve sketched out is very abstract. We’ve not selected any specific IPC technology. Moreover, even though the term operation suggests some kind of synchronous request/response-based IPC mechanism, you’ll see that asynchronous messaging plays a significant role. Throughout this book I describe architecture and design concepts that influence how these services collaborate.

Chapter 3 describes specific IPC technologies, including synchronous communication mechanisms such as REST, and asynchronous messaging using a message broker. I discuss how synchronous communication can impact availability and introduce the concept of a self-contained service, which doesn’t invoke other services synchronously. One way to implement a self-contained service is to use the CQRS pattern, covered in chapter 7. The Order Service could, for example, maintain a replica of the data owned by the Restaurant Service in order to eliminate the need for it to synchronously invoke the Restaurant Service to validate an order. It keeps the replica up-to-date by subscribing to events published by the Restaurant Service whenever it updates its data.
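That replica idea can be sketched as follows (the event name and its fields are hypothetical): the Order Service applies each Restaurant Service event to a local copy, then validates orders with a purely local lookup.

```python
# Sketch of a CQRS-style replica: the Order Service keeps a local copy of
# restaurant menus, updated by subscribing to Restaurant Service events.

class RestaurantReplica:
    def __init__(self):
        self._menus = {}  # restaurant_id -> set of menu item names

    def handle_event(self, event):
        # Called for each event published by the Restaurant Service.
        if event["type"] == "RestaurantMenuRevised":
            self._menus[event["restaurant_id"]] = set(event["menu_items"])

    def validate_order(self, restaurant_id, line_items):
        # Local lookup only: no synchronous call to the Restaurant Service.
        menu = self._menus.get(restaurant_id, set())
        return all(item in menu for item in line_items)
```

The trade-off is that the replica lags the source of truth slightly, but order validation keeps working even when the Restaurant Service is down.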

Chapter 4 introduces the saga concept and how it uses asynchronous messaging for coordinating the services that participate in the saga. As well as reliably updating data scattered across multiple services, a saga is also a way to implement a self-contained service. For example, I describe how the createOrder() operation is implemented using a saga, which invokes services such as the Consumer Service, Kitchen Service, and Accounting Service using asynchronous messaging.

Chapter 8 describes the concept of an API gateway, which exposes an API to external clients. An API gateway might implement a query operation using the API composition pattern, described in chapter 7, rather than simply route it to the service. Logic in the API gateway gathers the data needed by the query by calling multiple services and combining the results. In this situation, the system operation is assigned to the API gateway rather than a service. The services need to implement the query operations needed by the API gateway.

Summary

  • Architecture determines your application’s -ilities, including maintainability, testability, and deployability, which directly impact development velocity.
  • The microservice architecture is an architecture style that gives an application high maintainability, testability, and deployability.
  • Services in a microservice architecture are organized around business concerns—business capabilities or subdomains—rather than technical concerns.
  • There are two patterns for decomposition:

    • Decompose by business capability, which has its origins in business architecture
    • Decompose by subdomain, based on concepts from domain-driven design
  • You can eliminate god classes, which cause tangled dependencies that prevent decomposition, by applying DDD and defining a separate domain model for each service.

Chapter 3. Interprocess communication in a microservice architecture

This chapter covers

  • Applying the communication patterns: Remote procedure invocation, Circuit breaker, Client-side discovery, Self registration, Server-side discovery, Third party registration, Asynchronous messaging, Transactional outbox, Transaction log tailing, Polling publisher
  • The importance of interprocess communication in a microservice architecture
  • Defining and evolving APIs
  • The various interprocess communication options and their trade-offs
  • The benefits of services that communicate using asynchronous messaging
  • Reliably sending messages as part of a database transaction

Mary and her team, like most other developers, had some experience with interprocess communication (IPC) mechanisms. The FTGO application has a REST API that’s used by mobile applications and browser-side JavaScript. It also uses various cloud services, such as the Twilio messaging service and the Stripe payment service. But within a monolithic application like FTGO, modules invoke one another via language-level method or function calls. FTGO developers generally don’t need to think about IPC unless they’re working on the REST API or the modules that integrate with cloud services.

In contrast, as you saw in chapter 2, the microservice architecture structures an application as a set of services. Those services must often collaborate in order to handle a request. Because service instances are typically processes running on multiple machines, they must interact using IPC. It plays a much more important role in a microservice architecture than it does in a monolithic application. Consequently, as they migrate their application to microservices, Mary and the rest of the FTGO developers will need to spend a lot more time thinking about IPC.

There’s no shortage of IPC mechanisms to choose from. Today, the fashionable choice is REST (with JSON). It’s important, though, to remember that there are no silver bullets. You must carefully consider the options. This chapter explores various IPC options, including REST and messaging, and discusses the trade-offs.

The choice of IPC mechanism is an important architectural decision. It can impact application availability. What’s more, as I explain in this chapter and the next, IPC even intersects with transaction management. I favor an architecture consisting of loosely coupled services that communicate with one another using asynchronous messaging. Synchronous protocols such as REST are used mostly to communicate with other applications.

I begin this chapter with an overview of interprocess communication in a microservice architecture. Next, I describe remote procedure invocation-based IPC, of which REST is the most popular example. I cover important topics including service discovery and how to handle partial failure. After that, I describe asynchronous messaging-based IPC. I also talk about scaling consumers while preserving message ordering, correctly handling duplicate messages, and transactional messaging. Finally, I go through the concept of self-contained services that handle synchronous requests without communicating with other services in order to improve availability.

3.1. Overview of interprocess communication in a microservice architecture

There are lots of different IPC technologies to choose from. Services can use synchronous request/response-based communication mechanisms, such as HTTP-based REST or gRPC. Alternatively, they can use asynchronous, message-based communication mechanisms such as AMQP or STOMP. There are also a variety of different message formats. Services can use human-readable, text-based formats such as JSON or XML. Alternatively, they could use a more efficient binary format such as Avro or Protocol Buffers.

Before getting into the details of specific technologies, I want to bring up several design issues you should consider. I start this section with a discussion of interaction styles, which are a technology-independent way of describing how clients and services interact. Next I discuss the importance of precisely defining APIs in a microservice architecture, including the concept of API-first design. After that, I discuss the important topic of API evolution. Finally, I discuss different options for message formats and how they can determine ease of API evolution. Let’s begin by looking at interaction styles.

3.1.1. Interaction styles

It’s useful to first think about the style of interaction between a service and its clients before selecting an IPC mechanism for a service’s API. Thinking first about the interaction style will help you focus on the requirements and avoid getting mired in the details of a particular IPC technology. Also, as described in section 3.4, the choice of interaction style impacts the availability of your application. Furthermore, as you’ll see in chapters 9 and 10, it helps you select the appropriate integration testing strategy.

There are a variety of client-service interaction styles. As table 3.1 shows, they can be categorized in two dimensions. The first dimension is whether the interaction is one-to-one or one-to-many:

  • One-to-one - Each client request is processed by exactly one service.
  • One-to-many - Each request is processed by multiple services.

The second dimension is whether the interaction is synchronous or asynchronous:

  • Synchronous - The client expects a timely response from the service and might even block while it waits.
  • Asynchronous - The client doesn’t block, and the response, if any, isn’t necessarily sent immediately.
Table 3.1. The various interaction styles can be categorized in two dimensions: one-to-one versus one-to-many, and synchronous versus asynchronous.

                 One-to-one                      One-to-many
Synchronous      Request/response                —
Asynchronous     Asynchronous request/response   Publish/subscribe
                 One-way notifications           Publish/async responses

The following are the different types of one-to-one interactions:

  • Request/response - A service client makes a request to a service and waits for a response. The client expects the response to arrive in a timely fashion. It might even block while waiting. This is an interaction style that generally results in services being tightly coupled.
  • Asynchronous request/response - A service client sends a request to a service, which replies asynchronously. The client doesn’t block while waiting, because the service might not send the response for a long time.
  • One-way notifications - A service client sends a request to a service, but no reply is expected or sent.

It’s important to remember that the synchronous request/response interaction style is mostly orthogonal to IPC technologies. A service can, for example, interact with another service using request/response style interaction with either REST or messaging. Even if two services are communicating using a message broker, the client service might be blocked waiting for a response. It doesn’t necessarily mean they’re loosely coupled. That’s something I revisit later in this chapter when discussing the impact of inter-service communication on availability.

The following are the different types of one-to-many interactions:

  • Publish/subscribe - A client publishes a notification message, which is consumed by zero or more interested services.
  • Publish/async responses - A client publishes a request message and then waits for a certain amount of time for responses from interested services.

Each service will typically use a combination of these interaction styles. Many of the services in the FTGO application have both synchronous and asynchronous APIs for operations, and many also publish events.

Let’s look at how to define a service’s API.

3.1.2. Defining APIs in a microservice architecture

APIs or interfaces are central to software development. An application is comprised of modules. Each module has an interface that defines the set of operations that module’s clients can invoke. A well-designed interface exposes useful functionality while hiding the implementation. It enables the implementation to change without impacting clients.

In a monolithic application, an interface is typically specified using a programming language construct such as a Java interface. A Java interface specifies a set of methods that a client can invoke. The implementation class is hidden from the client. Moreover, because Java is a statically typed language, if the interface changes to be incompatible with the client, the application won’t compile.

APIs and interfaces are equally important in a microservice architecture. A service’s API is a contract between the service and its clients. As described in chapter 2, a service’s API consists of operations, which clients can invoke, and events, which are published by the service. An operation has a name, parameters, and a return type. An event has a type and a set of fields and is, as described in section 3.3, published to a message channel.

The challenge is that a service API isn’t defined using a simple programming language construct. By definition, a service and its clients aren’t compiled together. If a new version of a service is deployed with an incompatible API, there’s no compilation error. Instead, there will be runtime failures.

Regardless of which IPC mechanism you choose, it’s important to precisely define a service’s API using some kind of interface definition language (IDL). Moreover, there are good arguments for using an API-first approach to defining services (see www.programmableweb.com/news/how-to-design-great-apis-api-first-design-and-raml/how-to/2015/07/10 for more). First you write the interface definition. Then you review the interface definition with the client developers. Only after iterating on the API definition do you then implement the service. Doing this up-front design increases your chances of building a service that meets the needs of its clients.

API-first design is essential

Even in small projects, I’ve seen problems occur because components don’t agree on an API. For example, on one project the backend Java developer and the AngularJS frontend developer both said they had completed development. The application, however, didn’t work. The REST and WebSocket API used by the frontend application to communicate with the backend was poorly defined. As a result, the two applications couldn’t communicate!

The nature of the API definition depends on which IPC mechanism you’re using. For example, if you’re using messaging, the API consists of the message channels, the message types, and the message formats. If you’re using HTTP, the API consists of the URLs, the HTTP verbs, and the request and response formats. Later in this chapter, I explain how to define APIs.

A service’s API is rarely set in stone. It will likely evolve over time. Let’s take a look at how to do that and consider the issues you’ll face.

3.1.3. Evolving APIs

APIs invariably change over time as new features are added, existing features are changed, and (perhaps) old features are removed. In a monolithic application, it’s relatively straightforward to change an API and update all the callers. If you’re using a statically typed language, the compiler helps by giving a list of compilation errors. The only challenge may be the scope of the change. It might take a long time to change a widely used API.

In a microservices-based application, changing a service’s API is a lot more difficult. A service’s clients are other services, which are often developed by other teams. The clients may even be other applications outside of the organization. You usually can’t force all clients to upgrade in lockstep with the service. Also, because modern applications are usually never down for maintenance, you’ll typically perform a rolling upgrade of your service, so both old and new versions of a service will be running simultaneously.

It’s important to have a strategy for dealing with these challenges. How you handle a change to an API depends on the nature of the change.

Use semantic versioning

The Semantic Versioning specification (http://semver.org) is a useful guide to versioning APIs. It’s a set of rules that specify how version numbers are used and incremented. Semantic versioning was originally intended to be used for versioning of software packages, but you can use it for versioning APIs in a distributed system.

The Semantic Versioning specification (Semvers) requires a version number to consist of three parts: MAJOR.MINOR.PATCH. You must increment each part of a version number as follows:

  • MAJOR - When you make an incompatible change to the API
  • MINOR - When you make backward-compatible enhancements to the API
  • PATCH - When you make a backward-compatible bug fix

There are a couple of places you can use the version number in an API. If you’re implementing a REST API, you can, as mentioned below, use the major version as the first element of the URL path. Alternatively, if you’re implementing a service that uses messaging, you can include the version number in the messages that it publishes. The goal is to properly version APIs and to evolve them in a controlled fashion. Let’s look at how to handle minor and major changes.
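
To make the MAJOR.MINOR.PATCH rules concrete, here is a minimal sketch (the function names are illustrative, not from the FTGO code) of how a client library might decide whether a service's advertised API version is compatible with the version it was built against:

```python
def parse_semver(version: str):
    """Split a MAJOR.MINOR.PATCH string into integer components."""
    major, minor, patch = version.split(".")
    return int(major), int(minor), int(patch)

def is_compatible(client_version: str, service_version: str) -> bool:
    """Compatible when the MAJOR versions match and the service offers at
    least the MINOR level the client expects; PATCH never breaks clients."""
    c_major, c_minor, _ = parse_semver(client_version)
    s_major, s_minor, _ = parse_semver(service_version)
    return c_major == s_major and s_minor >= c_minor

# A MINOR or PATCH bump keeps old clients working; a MAJOR bump does not.
print(is_compatible("1.2.0", "1.3.5"))
print(is_compatible("1.2.0", "2.0.0"))
```

The same comparison applies whether the version number travels in a URL path or in a published message.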

Making minor, backward-compatible changes

Ideally, you should strive to only make backward-compatible changes. Backward-compatible changes are additive changes to an API:

  • Adding optional attributes to a request
  • Adding attributes to a response
  • Adding new operations

If you only ever make these kinds of changes, older clients will work with newer services, provided that they observe the Robustness principle (https://en.wikipedia.org/wiki/Robustness_principle), which states: “Be conservative in what you do, be liberal in what you accept from others.” Services should provide default values for missing request attributes. Similarly, clients should ignore any extra response attributes. In order for this to be painless, clients and services must use a request and response format that supports the Robustness principle. Later in this section, I describe how text-based formats such as JSON and XML generally make it easier to evolve APIs.
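
A JSON consumer that follows the Robustness principle might look like this sketch (the attribute names are illustrative): it supplies defaults for attributes the sender omitted and silently ignores attributes it doesn't recognize, so a newer service adding response attributes doesn't break it.

```python
import json

# Attributes this hypothetical consumer understands, with default values
# for anything the request might omit.
ORDER_DEFAULTS = {"orderId": None, "state": "APPROVED", "deliveryNotes": ""}

def parse_order(payload: str) -> dict:
    """Be liberal in what you accept: default missing attributes,
    ignore unknown ones."""
    incoming = json.loads(payload)
    order = dict(ORDER_DEFAULTS)
    for key in ORDER_DEFAULTS:
        if key in incoming:
            order[key] = incoming[key]
    return order

# A newer service added "courierId"; this older consumer simply ignores it.
order = parse_order('{"orderId": 42, "courierId": 7}')
```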

Making major, breaking changes

Sometimes you must make major, incompatible changes to an API. Because you can’t force clients to upgrade immediately, a service must simultaneously support old and new versions of an API for some period of time. If you’re using an HTTP-based IPC mechanism, such as REST, one approach is to embed the major version number in the URL. For example, version 1 paths are prefixed with '/v1/...', and version 2 paths with '/v2/...'.

Another option is to use HTTP’s content negotiation mechanism and include the version number in the MIME type. For example, a client would request version 1.x of an Order using a request like this:

GET /orders/xyz HTTP/1.1
Accept: application/vnd.example.resource+json; version=1
...

This request tells the Order Service that the client expects a version 1.x response.
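
On the server side, a service supporting this scheme has to pull the version parameter out of the Accept header before choosing which representation to return. A minimal, illustrative sketch of that parsing step:

```python
def requested_version(accept_header: str):
    """Extract the version parameter from a MIME type such as
    'application/vnd.example.resource+json; version=1'.
    Returns None when the client didn't request a specific version."""
    for part in accept_header.split(";"):
        name, _, value = part.strip().partition("=")
        if name == "version":
            return int(value)
    return None

version = requested_version("application/vnd.example.resource+json; version=1")
```

The service would then dispatch to the version 1 representation logic, or fall back to a default version when the parameter is absent.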

In order to support multiple versions of an API, the service’s adapters that implement the APIs will contain logic that translates between the old and new versions. Also, as described in chapter 8, the API gateway will almost certainly use versioned APIs. It may even have to support numerous older versions of an API.

Now we’ll look at the issue of message formats, the choice of which can impact how easy evolving an API will be.

3.1.4. Message formats

The essence of IPC is the exchange of messages. Messages usually contain data, and so an important design decision is the format of that data. The choice of message format can impact the efficiency of IPC, the usability of the API, and its evolvability. If you’re using a messaging system or protocols such as HTTP, you get to pick your message format. Some IPC mechanisms—such as gRPC, which you’ll learn about shortly—might dictate the message format. In either case, it’s essential to use a cross-language message format. Even if you’re writing your microservices in a single language today, it’s likely that you’ll use other languages in the future. You shouldn’t, for example, use Java serialization.

There are two main categories of message formats: text and binary. Let’s look at each one.

Text-based message formats

The first category is text-based formats such as JSON and XML. An advantage of these formats is that not only are they human readable, they’re self describing. A JSON message is a collection of named properties. Similarly, an XML message is effectively a collection of named elements and values. This format enables a consumer of a message to pick out the values of interest and ignore the rest. Consequently, many changes to the message schema can easily be backward-compatible.

The structure of XML documents is specified by an XML schema (www.w3.org/XML/Schema). Over time, the developer community has come to realize that JSON also needs a similar mechanism. One popular option is to use the JSON Schema standard (http://json-schema.org). A JSON schema defines the names and types of a message’s properties and whether they’re optional or required. As well as being useful documentation, a JSON schema can be used by an application to validate incoming messages.
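
To show the kind of check a schema enables, here is a deliberately cut-down validator in the spirit of JSON Schema (property names and the schema shape are illustrative; a real service would use the JSON Schema standard and an off-the-shelf validator):

```python
import json

# Each property maps to (expected type, required?).
ORDER_SCHEMA = {
    "orderId": (int, True),
    "state": (str, True),
    "deliveryNotes": (str, False),
}

def validate(payload: str, schema) -> list:
    """Return a list of validation errors; an empty list means valid."""
    message = json.loads(payload)
    errors = []
    for name, (expected_type, required) in schema.items():
        if name not in message:
            if required:
                errors.append(f"missing required property: {name}")
        elif not isinstance(message[name], expected_type):
            errors.append(f"wrong type for property: {name}")
    return errors

errors = validate('{"orderId": 42, "state": "APPROVED"}', ORDER_SCHEMA)
```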

A downside of using a text-based message format is that the messages tend to be verbose, especially XML. Every message has the overhead of containing the names of the attributes in addition to their values. Another drawback is the overhead of parsing text, especially when messages are large. Consequently, if efficiency and performance are important, you may want to consider using a binary format.

Binary message formats

There are several different binary formats to choose from. Popular formats include Protocol Buffers (https://developers.google.com/protocol-buffers/docs/overview) and Avro (https://avro.apache.org). Both formats provide a typed IDL for defining the structure of your messages. A compiler then generates the code that serializes and deserializes the messages. You’re forced to take an API-first approach to service design! Moreover, if you write your client in a statically typed language, the compiler checks that it uses the API correctly.

One difference between these two binary formats is that Protocol Buffers uses tagged fields, whereas an Avro consumer needs to know the schema in order to interpret messages. As a result, handling API evolution is easier with Protocol Buffers than with Avro. This blog post (http://martin.kleppmann.com/2012/12/05/schema-evolution-in-avro-protocol-buffers-thrift.html) is an excellent comparison of Thrift, Protocol Buffers, and Avro.

Now that we’ve looked at message formats, let’s look at specific IPC mechanisms that transport the messages, starting with the Remote procedure invocation (RPI) pattern.

3.2. Communicating using the synchronous Remote procedure invocation pattern

When using a remote procedure invocation-based IPC mechanism, a client sends a request to a service, and the service processes the request and sends back a response. Some clients may block waiting for a response, and others might have a reactive, nonblocking architecture. But unlike when using messaging, the client assumes that the response will arrive in a timely fashion.

Figure 3.1 shows how RPI works. The business logic in the client invokes a proxy interface, implemented by an RPI proxy adapter class. The RPI proxy makes a request to the service. The request is handled by an RPI server adapter class, which invokes the service’s business logic via an interface. It then sends back a reply to the RPI proxy, which returns the result to the client’s business logic.
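
The structure in figure 3.1 can be sketched as follows, with an in-memory "transport" standing in for the network; all class and method names here are illustrative, not the FTGO implementation:

```python
class OrderServiceProxy:
    """RPI proxy adapter: implements the interface the client's business
    logic calls, and turns each call into a request to the service."""
    def __init__(self, transport):
        self.transport = transport

    def create_order(self, consumer_id):
        reply = self.transport.send({"op": "createOrder",
                                     "consumerId": consumer_id})
        return reply["orderId"]

class OrderServiceServerAdapter:
    """RPI server adapter: handles a request by invoking the service's
    business logic and sending back a reply."""
    def __init__(self):
        self.next_id = 1  # stand-in for real business logic

    def send(self, request):
        if request["op"] == "createOrder":
            order_id = self.next_id
            self.next_id += 1
            return {"orderId": order_id}
        raise ValueError("unknown operation")

# The client's business logic sees only the proxy interface.
proxy = OrderServiceProxy(OrderServiceServerAdapter())
order_id = proxy.create_order(consumer_id=101)
```

The point of the proxy is that the client's business logic is unaware of whether the call crosses the network via REST, gRPC, or something else.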

Pattern: Remote procedure invocation

A client invokes a service using a synchronous, remote procedure invocation-based protocol, such as REST (http://microservices.io/patterns/communication-style/messaging.html).

Figure 3.1. The client’s business logic invokes an interface implemented by an RPI proxy adapter class. The RPI proxy class makes a request to the service. An RPI server adapter class handles the request by invoking the service’s business logic.

The proxy interface usually encapsulates the underlying communication protocol. There are numerous protocols to choose from. In this section, I describe REST and gRPC. I cover how to improve the availability of your services by properly handling partial failure and explain why a microservices-based application that uses RPI must use a service discovery mechanism.

Let’s first take a look at REST.

3.2.1. Using REST

Today, it’s fashionable to develop APIs in the RESTful style (https://en.wikipedia.org/wiki/Representational_state_transfer). REST is an IPC mechanism that (almost always) uses HTTP. Roy Fielding, the creator of REST, defines REST as follows:

REST provides a set of architectural constraints that, when applied as a whole, emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems.

www.ics.uci.edu/~fielding/pubs/dissertation/top.htm

A key concept in REST is a resource, which typically represents a single business object, such as a Customer or Product, or a collection of business objects. REST uses the HTTP verbs for manipulating resources, which are referenced using a URL. For example, a GET request returns the representation of a resource, which is often in the form of an XML document or JSON object, although other formats such as binary can be used. A POST request creates a new resource, and a PUT request updates a resource. The Order Service, for example, has a POST /orders endpoint for creating an Order and a GET /orders/{orderId} endpoint for retrieving an Order.

Many developers claim their HTTP-based APIs are RESTful. But as Roy Fielding describes in a blog post, not all of them actually are (http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven). To understand why, let’s take a look at the REST maturity model.

The REST maturity model

Leonard Richardson (no relation to your author) defines a very useful maturity model for REST (http://martinfowler.com/articles/richardsonMaturityModel.html) that consists of the following levels:

  • Level 0 - Clients of a level 0 service invoke the service by making HTTP POST requests to its sole URL endpoint. Each request specifies the action to perform, the target of the action (for example, the business object), and any parameters.
  • Level 1 - A level 1 service supports the idea of resources. To perform an action on a resource, a client makes a POST request that specifies the action to perform and any parameters.
  • Level 2 - A level 2 service uses HTTP verbs to perform actions: GET to retrieve, POST to create, and PUT to update. The request query parameters and body, if any, specify the actions’ parameters. This enables services to use web infrastructure such as caching for GET requests.
  • Level 3 - The design of a level 3 service is based on the terribly named HATEOAS (Hypertext As The Engine Of Application State) principle. The basic idea is that the representation of a resource returned by a GET request contains links for performing actions on that resource. For example, a client can cancel an order using a link in the representation returned by the GET request that retrieved the order. The benefits of HATEOAS include no longer having to hard-wire URLs into client code (www.infoq.com/news/2009/04/hateoas-restful-api-advantages).
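
An illustrative level 3 response makes the idea concrete: the representation carries a links collection, and the client follows the link for the action it wants rather than constructing URLs itself. The field names and URLs below are hypothetical.

```python
# A HATEOAS-style representation returned by GET /orders/xyz.
order_representation = {
    "orderId": "xyz",
    "state": "APPROVED",
    "links": [
        {"rel": "self", "href": "/orders/xyz"},
        {"rel": "cancel", "href": "/orders/xyz/cancel"},
    ],
}

def link_for(representation, rel):
    """Find the URL for a given action, if the server currently offers it."""
    for link in representation["links"]:
        if link["rel"] == rel:
            return link["href"]
    return None

cancel_url = link_for(order_representation, "cancel")
```

Note that the server can omit the cancel link when the order is no longer cancellable, so the links also communicate which state transitions are allowed.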

I encourage you to review the REST APIs at your organization to see which level they correspond to.

Specifying REST APIs

As mentioned earlier in section 3.1, you must define your APIs using an interface definition language (IDL). Unlike older communication protocols like CORBA and SOAP, REST did not originally have an IDL. Fortunately, the developer community has rediscovered the value of an IDL for RESTful APIs. The most popular REST IDL is the Open API Specification (www.openapis.org), which evolved from the Swagger open source project. The Swagger project is a set of tools for developing and documenting REST APIs. It includes tools that generate client stubs and server skeletons from an interface definition.

The challenge of fetching multiple resources in a single request

REST resources are usually oriented around business objects, such as Consumer and Order. Consequently, a common problem when designing a REST API is how to enable the client to retrieve multiple related objects in a single request. For example, imagine that a REST client wanted to retrieve an Order and the Order’s Consumer. A pure REST API would require the client to make at least two requests, one for the Order and another for its Consumer. A more complex scenario would require even more round-trips and suffer from excessive latency.

One solution to this problem is for an API to allow the client to retrieve related resources when it gets a resource. For example, a client could retrieve an Order and its Consumer using GET /orders/order-id-1345?expand=consumer. The query parameter specifies the related resources to return with the Order. This approach works well in many scenarios but it’s often insufficient for more complex scenarios. It’s also potentially time consuming to implement. This has led to the increasing popularity of alternative API technologies such as GraphQL (http://graphql.org) and Netflix Falcor (http://netflix.github.io/falcor/), which are designed to support efficient data fetching.
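
A sketch of how a service might honor the expand query parameter (the data and URL layout are illustrative; a real Order Service would query its own store and call the service that owns Consumer):

```python
from urllib.parse import urlparse, parse_qs

# Illustrative in-memory data standing in for the services' stores.
ORDERS = {"order-id-1345": {"orderId": "order-id-1345", "consumerId": "c1"}}
CONSUMERS = {"c1": {"consumerId": "c1", "name": "Mary"}}

def get_order(url):
    """Handle GET /orders/{orderId}?expand=consumer by embedding the
    related Consumer in the Order representation when requested."""
    parsed = urlparse(url)
    order_id = parsed.path.rsplit("/", 1)[-1]
    order = dict(ORDERS[order_id])
    expand = parse_qs(parsed.query).get("expand", [])
    if "consumer" in expand:
        order["consumer"] = CONSUMERS[order["consumerId"]]
    return order

order = get_order("/orders/order-id-1345?expand=consumer")
```

The client gets both objects in one round-trip, at the cost of the Order Service having to know how to fetch each expandable relation.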

The challenge of mapping operations to HTTP verbs

Another common REST API design problem is how to map the operations you want to perform on a business object to an HTTP verb. A REST API should use PUT for updates, but there may be multiple ways to update an order, including cancelling it, revising the order, and so on. Also, an update might not be idempotent, which is a requirement for using PUT. One solution is to define a sub-resource for updating a particular aspect of a resource. The Order Service, for example, has a POST /orders/{orderId}/cancel endpoint for cancelling orders, and a POST /orders/{orderId}/revise endpoint for revising orders. Another solution is to specify a verb as a URL query parameter. Sadly, neither solution is particularly RESTful.
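
The sub-resource approach amounts to giving each non-idempotent update its own POST endpoint. A minimal sketch of such a route table (the handlers and state values are illustrative):

```python
def cancel_order(order):
    # Stand-in business logic for POST /orders/{orderId}/cancel.
    return {**order, "state": "CANCELLED"}

def revise_order(order):
    # Stand-in business logic for POST /orders/{orderId}/revise.
    return {**order, "state": "REVISION_PENDING"}

# Each update operation that doesn't fit a single PUT gets its own route.
ROUTES = {
    ("POST", "/orders/{orderId}/cancel"): cancel_order,
    ("POST", "/orders/{orderId}/revise"): revise_order,
}

def dispatch(verb, route, order):
    return ROUTES[(verb, route)](order)

result = dispatch("POST", "/orders/{orderId}/cancel",
                  {"orderId": "xyz", "state": "APPROVED"})
```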

This problem with mapping operations to HTTP verbs has led to the growing popularity of alternatives to REST, such as gRPC, discussed shortly in section 3.2.2. But first let’s look at the benefits and drawbacks of REST.

Benefits and drawbacks of REST

There are numerous benefits to using REST:

  • It’s simple and familiar.
  • You can test an HTTP API from within a browser using, for example, the Postman plugin, or from the command line using curl (assuming JSON or some other text format is used).
  • It directly supports request/response style communication.
  • HTTP is, of course, firewall friendly.
  • It doesn’t require an intermediate broker, which simplifies the system’s architecture.

There are some drawbacks to using REST:

  • It only supports the request/response style of communication.
  • Reduced availability. Because the client and service communicate directly without an intermediary to buffer messages, they must both be running for the duration of the exchange.
  • Clients must know the locations (URLs) of the service instance(s). As described in section 3.2.4, this is a nontrivial problem in a modern application. Clients must use what is known as a service discovery mechanism to locate service instances.
  • Fetching multiple resources in a single request is challenging.
  • It’s sometimes difficult to map multiple update operations to HTTP verbs.

Despite these drawbacks, REST seems to be the de facto standard for APIs, though there are a couple of interesting alternatives. GraphQL, for example, implements flexible, efficient data fetching. Chapter 8 discusses GraphQL and covers the API gateway pattern.

gRPC is another alternative to REST. Let’s take a look at how it works.

3.2.2. Using gRPC

As mentioned in the preceding section, one challenge with using REST is that because HTTP only provides a limited number of verbs, it’s not always straightforward to design a REST API that supports multiple update operations. An IPC technology that avoids this issue is gRPC (www.grpc.io), a framework for writing cross-language clients and servers (see https://en.wikipedia.org/wiki/Remote_procedure_call for more). gRPC is a binary message-based protocol, and this means—as mentioned earlier in the discussion of binary message formats—you’re forced to take an API-first approach to service design. You define your gRPC APIs using a Protocol Buffers-based IDL, which is Google’s language-neutral mechanism for serializing structured data. You use the Protocol Buffer compiler to generate client-side stubs and server-side skeletons. The compiler can generate code for a variety of languages, including Java, C#, NodeJS, and GoLang. Clients and servers exchange binary messages in the Protocol Buffers format using HTTP/2.

A gRPC API consists of one or more services and request/response message definitions. A service definition is analogous to a Java interface and is a collection of strongly typed methods. As well as supporting simple request/response RPC, gRPC supports streaming RPC. A server can reply with a stream of messages to the client. Alternatively, a client can send a stream of messages to the server.

gRPC uses Protocol Buffers as the message format. Protocol Buffers is, as mentioned earlier, an efficient, compact, binary format. It’s a tagged format. Each field of a Protocol Buffers message is numbered and has a type code. A message recipient can extract the fields that it needs and skip over the fields that it doesn’t recognize. As a result, gRPC enables APIs to evolve while remaining backward-compatible.
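The tagged format can be illustrated with a toy encoder/decoder in plain Java. This is a sketch, not the real Protocol Buffers library: it handles only varint (wire type 0) and length-delimited (wire type 2) fields, but it shows how a key byte of (fieldNumber << 3) | wireType lets a reader skip unrecognized fields:

```java
import java.io.*;
import java.util.*;

// Minimal sketch of Protocol Buffers' tagged wire format. Each field is
// preceded by a varint key: (fieldNumber << 3) | wireType.
public class ProtoSketch {
    // Encodes a value as a base-128 varint, low-order 7 bits first.
    public static void writeVarint(ByteArrayOutputStream out, long v) {
        while ((v & ~0x7FL) != 0) {
            out.write((int) ((v & 0x7F) | 0x80));
            v >>>= 7;
        }
        out.write((int) v);
    }

    public static long readVarint(ByteArrayInputStream in) {
        long result = 0; int shift = 0; int b;
        do {
            b = in.read();
            result |= (long) (b & 0x7F) << shift;
            shift += 7;
        } while ((b & 0x80) != 0);
        return result;
    }

    // Decodes only the field numbers it knows, skipping the rest -- the
    // property that lets an old reader handle a newer writer's message.
    public static Map<Integer, Long> decodeKnown(byte[] bytes, Set<Integer> known) {
        ByteArrayInputStream in = new ByteArrayInputStream(bytes);
        Map<Integer, Long> fields = new HashMap<>();
        while (in.available() > 0) {
            long key = readVarint(in);
            int fieldNumber = (int) (key >>> 3), wireType = (int) (key & 7);
            if (wireType == 0) {                      // varint value
                long value = readVarint(in);
                if (known.contains(fieldNumber)) fields.put(fieldNumber, value);
            } else if (wireType == 2) {               // length-delimited: skip payload
                long len = readVarint(in);
                in.skip(len);
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVarint(out, (1 << 3) | 0); writeVarint(out, 99L);   // restaurantId (tag 1) = 99
        writeVarint(out, (7 << 3) | 0); writeVarint(out, 5L);    // unknown field 7, skipped
        System.out.println(decodeKnown(out.toByteArray(), Set.of(1, 2)));
    }
}
```

A decoder that knows only tags 1 and 2 still parses a message containing an extra field 7, which is why adding fields doesn't break old readers.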

Listing 3.1 shows an excerpt of the gRPC API for the Order Service. It defines several methods, including createOrder(). This method takes a CreateOrderRequest as a parameter and returns a CreateOrderReply.

Listing 3.1. Excerpt of the gRPC API for the Order Service
service OrderService {
  rpc createOrder(CreateOrderRequest) returns (CreateOrderReply) {}
  rpc cancelOrder(CancelOrderRequest) returns (CancelOrderReply) {}
  rpc reviseOrder(ReviseOrderRequest) returns (ReviseOrderReply) {}
  ...
}

message CreateOrderRequest {
  int64 restaurantId = 1;
  int64 consumerId = 2;
  repeated LineItem lineItems = 3;
  ...
}

message LineItem {
  string menuItemId = 1;
  int32 quantity = 2;
}


message CreateOrderReply {
  int64 orderId = 1;
}
...

CreateOrderRequest and CreateOrderReply are typed messages. For example, the CreateOrderRequest message has a restaurantId field of type int64. The field’s tag value is 1.

gRPC has several benefits:

  • It’s straightforward to design an API that has a rich set of update operations.
  • It has an efficient, compact IPC mechanism, especially when exchanging large messages.
  • Bidirectional streaming enables both RPI and messaging styles of communication.
  • It enables interoperability between clients and services written in a wide range of languages.

gRPC also has several drawbacks:

  • It takes more work for JavaScript clients to consume a gRPC-based API than a REST/JSON-based API.
  • Older firewalls might not support HTTP/2.

gRPC is a compelling alternative to REST, but like REST, it’s a synchronous communication mechanism, so it also suffers from the problem of partial failure. Let’s take a look at what that is and how to handle it.

3.2.3. Handling partial failure using the Circuit breaker pattern

In a distributed system, whenever a service makes a synchronous request to another service, there is an ever-present risk of partial failure. Because the client and the service are separate processes, a service may not be able to respond in a timely way to a client’s request. The service could be down because of a failure or for maintenance. Or the service might be overloaded and responding extremely slowly to requests. Because the client is blocked waiting for a response, the danger is that the failure could cascade to the client’s clients and so on and cause an outage.

Pattern: Circuit breaker

An RPI proxy that immediately rejects invocations for a timeout period after the number of consecutive failures exceeds a specified threshold. See http://microservices.io/patterns/reliability/circuit-breaker.html.

Consider, for example, the scenario shown in figure 3.2, where the Order Service is unresponsive. A mobile client makes a REST request to an API gateway, which, as discussed in chapter 8, is the entry point into the application for API clients. The API gateway proxies the request to the unresponsive Order Service.

Figure 3.2. An API gateway must protect itself from unresponsive services, such as the Order Service.

A naive implementation of the OrderServiceProxy would block indefinitely, waiting for a response. Not only would that result in a poor user experience, but in many applications it would consume a precious resource, such as a thread. Eventually the API gateway would run out of resources and become unable to handle requests. The entire API would be unavailable.

It’s essential that you design your services to prevent partial failures from cascading throughout the application. There are two parts to the solution:

  • You must design RPI proxies, such as OrderServiceProxy, to handle unresponsive remote services.
  • You need to decide how to recover from a failed remote service.

First we’ll look at how to write robust RPI proxies.

Developing robust RPI proxies

Whenever one service synchronously invokes another service, it should protect itself using the approach described by Netflix (http://techblog.netflix.com/2012/02/fault-tolerance-in-high-volume.html). This approach consists of a combination of the following mechanisms:

  • Network timeouts: Never block indefinitely and always use timeouts when waiting for a response. Using timeouts ensures that resources are never tied up indefinitely.
  • Limiting the number of outstanding requests from a client to a service: Impose an upper bound on the number of outstanding requests that a client can make to a particular service. If the limit has been reached, it’s probably pointless to make additional requests, and those attempts should fail immediately.
  • Circuit breaker pattern: Track the number of successful and failed requests, and if the error rate exceeds some threshold, trip the circuit breaker so that further attempts fail immediately. A large number of requests failing suggests that the service is unavailable and that sending more requests is pointless. After a timeout period, the client should try again, and, if successful, close the circuit breaker.
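The circuit-breaker mechanism can be sketched as a toy in plain Java. The threshold, the injectable clock, and the simplified half-open handling are illustrative; a production library such as Hystrix or Resilience4j provides much more:

```java
import java.util.function.Supplier;

// Toy circuit breaker: trips after N consecutive failures, fails fast while
// open, and allows a trial call after the open timeout elapses.
public class CircuitBreaker {
    enum State { CLOSED, OPEN }

    private final int failureThreshold;
    private final long openTimeoutMillis;
    private final Supplier<Long> clock;          // injectable so tests can control time
    private State state = State.CLOSED;
    private int consecutiveFailures = 0;
    private long openedAt;

    public CircuitBreaker(int failureThreshold, long openTimeoutMillis, Supplier<Long> clock) {
        this.failureThreshold = failureThreshold;
        this.openTimeoutMillis = openTimeoutMillis;
        this.clock = clock;
    }

    public <T> T call(Supplier<T> remoteCall) {
        if (state == State.OPEN) {
            if (clock.get() - openedAt < openTimeoutMillis)
                throw new IllegalStateException("circuit open: failing fast");
            // Timeout elapsed: fall through and allow one trial call ("half-open").
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0;
            state = State.CLOSED;                // success closes the circuit
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            if (consecutiveFailures >= failureThreshold) {
                state = State.OPEN;
                openedAt = clock.get();
            }
            throw e;
        }
    }
}
```

While the circuit is open, callers get an immediate exception instead of tying up a thread waiting on a service that is probably down.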

Netflix Hystrix (https://github.com/Netflix/Hystrix) is an open source library that implements these and other patterns. If you’re using the JVM, you should definitely consider using Hystrix when implementing RPI proxies. And if you’re running in a non-JVM environment, you should use an equivalent library. For example, the Polly library is popular in the .NET community (https://github.com/App-vNext/Polly).

Recovering from an unavailable service

Using a library such as Hystrix is only part of the solution. You must also decide on a case-by-case basis how your services should recover from an unresponsive remote service. One option is for a service to simply return an error to its client. For example, this approach makes sense for the scenario shown in figure 3.2, where the request to create an Order fails. The only option is for the API gateway to return an error to the mobile client.

In other scenarios, returning a fallback value, such as either a default value or a cached response, may make sense. For example, chapter 7 describes how the API gateway could implement the findOrder() query operation by using the API composition pattern. As figure 3.3 shows, its implementation of the GET /orders/{orderId} endpoint invokes several services, including the Order Service, Kitchen Service, and Delivery Service, and combines the results.

Figure 3.3. The API gateway implements the GET /orders/{orderId} endpoint using API composition. It invokes several services, aggregates their responses, and sends a response to the mobile app. The code that implements the endpoint must have a strategy for handling the failure of each of the services it invokes.

It’s likely that each service’s data isn’t equally important to the client. The data from the Order Service is essential. If this service is unavailable, the API gateway should return either a cached version of its data or an error. The data from the other services is less critical. A client can, for example, display useful information to the user even if the delivery status was unavailable. If the Delivery Service is unavailable, the API gateway should return either a cached version of its data or omit it from the response.
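This per-service fallback policy can be sketched in plain Java; the supplier parameters and cache are hypothetical stand-ins for real remote calls, but the structure shows the decision: essential data fails the whole request, non-essential data falls back to a cached value or is omitted:

```java
import java.util.*;
import java.util.function.Supplier;

// Sketch of per-service fallback in an API composer: order data is essential,
// delivery status falls back to the last cached value or is omitted.
public class FindOrderComposer {
    private final Map<String, String> deliveryCache = new HashMap<>();

    public Map<String, String> findOrder(Supplier<String> orderService,
                                         Supplier<String> deliveryService,
                                         String orderId) {
        Map<String, String> response = new HashMap<>();
        response.put("order", orderService.get());   // essential: let failure propagate
        try {
            String status = deliveryService.get();
            deliveryCache.put(orderId, status);      // remember the last good value
            response.put("deliveryStatus", status);
        } catch (RuntimeException e) {
            String cached = deliveryCache.get(orderId);
            if (cached != null) response.put("deliveryStatus", cached);
            // else: omit delivery status from the response entirely
        }
        return response;
    }
}
```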

It’s essential that you design your services to handle partial failure, but that’s not the only problem you need to solve when using RPI. Another problem is that in order for one service to invoke another service using RPI, it needs to know the network location of a service instance. On the surface this sounds simple, but in practice it’s a challenging problem. You must use a service discovery mechanism. Let’s look at how that works.

3.2.4. Using service discovery

Say you’re writing some code that invokes a service that has a REST API. In order to make a request, your code needs to know the network location (IP address and port) of a service instance. In a traditional application running on physical hardware, the network locations of service instances are usually static. For example, your code could read the network locations from a configuration file that’s occasionally updated. But in a modern, cloud-based microservices application, it’s usually not that simple. As is shown in figure 3.4, a modern application is much more dynamic.

Figure 3.4. Service instances have dynamically assigned IP addresses.

Service instances have dynamically assigned network locations. Moreover, the set of service instances changes dynamically because of autoscaling, failures, and upgrades. Consequently, your client code must use service discovery.

Overview of service discovery

As you’ve just seen, you can’t statically configure a client with the IP addresses of the services. Instead, an application must use a dynamic service discovery mechanism. Service discovery is conceptually quite simple: its key component is a service registry, which is a database of the network locations of an application’s service instances.

The service discovery mechanism updates the service registry when service instances start and stop. When a client invokes a service, the service discovery mechanism queries the service registry to obtain a list of available service instances and routes the request to one of them.

There are two main ways to implement service discovery:

  • The services and their clients interact directly with the service registry.
  • The deployment infrastructure handles service discovery. (I talk more about that in chapter 12.)

Let’s look at each option.

Applying the application-level service discovery patterns

One way to implement service discovery is for the application’s services and their clients to interact with the service registry. Figure 3.5 shows how this works. A service instance registers its network location with the service registry. A service client invokes a service by first querying the service registry to obtain a list of service instances. It then sends a request to one of those instances.

Figure 3.5. The service registry tracks service instances. Clients query the service registry to find available service instances.

This approach to service discovery is a combination of two patterns. The first pattern is the Self registration pattern. A service instance invokes the service registry’s registration API to register its network location. It may also supply a health check URL, described in more detail in chapter 11. The health check URL is an API endpoint that the service registry invokes periodically to verify that the service instance is healthy and available to handle requests. A service registry may require a service instance to periodically invoke a “heartbeat” API in order to prevent its registration from expiring.

Pattern: Self registration

A service instance registers itself with the service registry. See http://microservices.io/patterns/self-registration.html.
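A toy registry with heartbeat-based lease expiry might look like this in plain Java. Real registries such as Eureka add replication and configurable lease times; the TTL handling and explicit clock parameter here are illustrative:

```java
import java.util.*;

// Sketch of a service registry where registrations expire unless renewed
// by periodic heartbeats.
public class HeartbeatRegistry {
    private final long ttlMillis;
    private final Map<String, Map<String, Long>> lastHeartbeat = new HashMap<>();

    public HeartbeatRegistry(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Called by the instance on startup and then periodically as a heartbeat.
    public void register(String serviceName, String location, long now) {
        lastHeartbeat.computeIfAbsent(serviceName, k -> new HashMap<>()).put(location, now);
    }

    // Returns only the instances whose registration hasn't expired.
    public List<String> lookup(String serviceName, long now) {
        List<String> live = new ArrayList<>();
        for (Map.Entry<String, Long> e :
                lastHeartbeat.getOrDefault(serviceName, Map.of()).entrySet()) {
            if (now - e.getValue() <= ttlMillis) live.add(e.getKey());
        }
        return live;
    }
}
```

An instance that crashes simply stops heartbeating, so its stale registration drops out of lookups once the lease expires.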

The second pattern is the Client-side discovery pattern. When a service client wants to invoke a service, it queries the service registry to obtain a list of the service’s instances. To improve performance, a client might cache the service instances. The service client then uses a load-balancing algorithm, such as round-robin or random, to select a service instance. It then makes a request to the selected service instance.

Pattern: Client-side discovery

A service client retrieves the list of available service instances from the service registry and load balances across them. See http://microservices.io/patterns/client-side-discovery.html.
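The client side of this pattern can be sketched in plain Java as a cached instance list plus a round-robin selector (the instance addresses are hypothetical):

```java
import java.util.*;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of client-side discovery: the client caches the instance list it
// fetched from the registry and round-robins requests across it.
public class RoundRobinClient {
    private final List<String> cachedInstances;
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinClient(List<String> instancesFromRegistry) {
        this.cachedInstances = List.copyOf(instancesFromRegistry);
    }

    // Picks the next instance in rotation; Math.floorMod keeps the index
    // valid even after the int counter wraps around.
    public String chooseInstance() {
        int i = Math.floorMod(counter.getAndIncrement(), cachedInstances.size());
        return cachedInstances.get(i);
    }
}
```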

Application-level service discovery has been popularized by Netflix and Pivotal. Netflix developed and open sourced several components: Eureka, a highly available service registry, the Eureka Java client, and Ribbon, a sophisticated HTTP client that supports the Eureka client. Pivotal developed Spring Cloud, a Spring-based framework that makes it remarkably easy to use the Netflix components. Spring Cloud-based services automatically register with Eureka, and Spring Cloud-based clients automatically use Eureka for service discovery.

One benefit of application-level service discovery is that it handles the scenario when services are deployed on multiple deployment platforms. Imagine, for example, you’ve deployed only some of your services on Kubernetes, discussed in chapter 12, and the rest is running in a legacy environment. Application-level service discovery using Eureka, for example, works across both environments, whereas Kubernetes-based service discovery only works within Kubernetes.

One drawback of application-level service discovery is that you need a service discovery library for every language—and possibly framework—that you use. Spring Cloud only helps Spring developers. If you’re using some other Java framework or a non-JVM language such as NodeJS or GoLang, you must find some other service discovery framework. Another drawback of application-level service discovery is that you’re responsible for setting up and managing the service registry, which is a distraction. As a result, it’s usually better to use a service discovery mechanism that’s provided by the deployment infrastructure.

Applying the platform-provided service discovery patterns

Later in chapter 12 you’ll learn that many modern deployment platforms such as Docker and Kubernetes have a built-in service registry and service discovery mechanism. The deployment platform gives each service a DNS name, a virtual IP (VIP) address, and a DNS name that resolves to the VIP address. A service client makes a request to the DNS name/VIP, and the deployment platform automatically routes the request to one of the available service instances. As a result, service registration, service discovery, and request routing are entirely handled by the deployment platform. Figure 3.6 shows how this works.

Figure 3.6. The platform handles service registration, discovery, and request routing. A registrar registers service instances with the service registry. Each service has a network location, a DNS name/virtual IP address. A client makes a request to the service’s network location. A router queries the service registry and load balances requests across the available service instances.

The deployment platform includes a service registry that tracks the IP addresses of the deployed services. In this example, a client accesses the Order Service using the DNS name order-service, which resolves to the virtual IP address 10.1.3.4. The deployment platform automatically load balances requests across the three instances of the Order Service.

This approach is a combination of two patterns:

  • 3rd party registration pattern: Instead of a service registering itself with the service registry, a third party called the registrar, which is typically part of the deployment platform, handles the registration.
  • Server-side discovery pattern: Instead of a client querying the service registry, it makes a request to a DNS name, which resolves to a request router that queries the service registry and load balances requests.

Pattern: 3rd party registration

Service instances are automatically registered with the service registry by a third party. See http://microservices.io/patterns/3rd-party-registration.html.

Pattern: Server-side discovery

A client makes a request to a router, which is responsible for service discovery. See http://microservices.io/patterns/server-side-discovery.html.

The key benefit of platform-provided service discovery is that all aspects of service discovery are entirely handled by the deployment platform. Neither the services nor the clients contain any service discovery code. Consequently, the service discovery mechanism is readily available to all services and clients regardless of which language or framework they’re written in.

One drawback of platform-provided service discovery is that it only supports the discovery of services that have been deployed using the platform. For example, as mentioned earlier when describing application-level discovery, Kubernetes-based discovery only works for services running on Kubernetes. Despite this limitation, I recommend using platform-provided service discovery whenever possible.

Now that we’ve looked at synchronous IPC using REST or gRPC, let’s take a look at the alternative: asynchronous, message-based communication.

3.3. Communicating using the Asynchronous messaging pattern

When using messaging, services communicate by asynchronously exchanging messages. A messaging-based application typically uses a message broker, which acts as an intermediary between the services, although another option is to use a brokerless architecture, where the services communicate directly with each other. A service client makes a request to a service by sending it a message. If the service instance is expected to reply, it will do so by sending a separate message back to the client. Because the communication is asynchronous, the client doesn’t block waiting for a reply. Instead, the client is written assuming that the reply won’t be received immediately.

Pattern: Messaging

A client invokes a service using asynchronous messaging. See http://microservices.io/patterns/communication-style/messaging.html.

I start this section with an overview of messaging. I show how to describe a messaging architecture independently of messaging technology. Next I compare and contrast brokerless and broker-based architectures and describe the criteria for selecting a message broker. I then discuss several important topics, including scaling consumers while preserving message ordering, detecting and discarding duplicate messages, and sending and receiving messages as part of a database transaction. Let’s begin by looking at how messaging works.

3.3.1. Overview of messaging

A useful model of messaging is defined in the book Enterprise Integration Patterns (Addison-Wesley Professional, 2003) by Gregor Hohpe and Bobby Woolf. In this model, messages are exchanged over message channels. A sender (an application or service) writes a message to a channel, and a receiver (an application or service) reads messages from a channel. Let’s look at messages and then look at channels.

About messages

A message consists of a header and a message body (www.enterpriseintegrationpatterns.com/Message.html). The header is a collection of name-value pairs, metadata that describes the data being sent. In addition to name-value pairs provided by the message’s sender, the message header contains name-value pairs, such as a unique message id generated by either the sender or the messaging infrastructure, and an optional return address, which specifies the message channel that a reply should be written to. The message body is the data being sent, in either text or binary format.
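The header/body structure can be sketched as a small Java class; the header names and the id-generation policy here are illustrative, not a standard:

```java
import java.util.*;

// Sketch of the message model: a header of name-value pairs (including a
// message id and an optional return address) plus an opaque body.
public class Message {
    public static final String ID = "message-id";
    public static final String RETURN_ADDRESS = "return-address";

    private final Map<String, String> headers;
    private final String body;

    public Message(Map<String, String> headers, String body) {
        this.headers = new HashMap<>(headers);
        // The messaging infrastructure assigns an id if the sender didn't.
        this.headers.putIfAbsent(ID, UUID.randomUUID().toString());
        this.body = body;
    }

    public String header(String name) { return headers.get(name); }
    public String body() { return body; }
}
```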

There are several different kinds of messages:

  • Document: A generic message that contains only data. The receiver decides how to interpret it. The reply to a command is an example of a document message.
  • Command: A message that’s the equivalent of an RPC request. It specifies the operation to invoke and its parameters.
  • Event: A message indicating that something notable has occurred in the sender. An event is often a domain event, which represents a state change of a domain object such as an Order or a Customer.

The approach to the microservice architecture described in this book uses commands and events extensively.

Let’s now look at channels, the mechanism by which services communicate.

About message channels

As figure 3.7 shows, messages are exchanged over channels (www.enterpriseintegrationpatterns.com/MessageChannel.html). The business logic in the sender invokes a sending port interface, which encapsulates the underlying communication mechanism. The sending port is implemented by a message sender adapter class, which sends a message to a receiver via a message channel. A message channel is an abstraction of the messaging infrastructure. A message handler adapter class in the receiver is invoked to handle the message. It invokes a receiving port interface implemented by the consumer’s business logic. Any number of senders can send messages to a channel. Similarly, any number of receivers can receive messages from a channel.

Figure 3.7. The business logic in the sender invokes a sending port interface, which is implemented by a message sender adapter. The message sender sends a message to the receiver via a message channel. A message channel is an abstraction of the messaging infrastructure. A message handler adapter in the receiver is invoked to handle the message. It invokes the receiving port interface implemented by the receiver’s business logic.

There are two kinds of channels: point-to-point (www.enterpriseintegrationpatterns.com/PointToPointChannel.html) and publish-subscribe (www.enterpriseintegrationpatterns.com/PublishSubscribeChannel.html):

  • 点对点通道将消息传送给正在从该通道读取数据的使用者之一。服务使用点对点 channel 进行一对一交互样式的调用。例如,命令消息通常通过点对点发送 渠道。
  • A point-to-point channel delivers a message to exactly one of the consumers that is reading from the channel. Services use point-to-point channels for the one-to-one interaction styles described earlier. For example, a command message is often sent over a point-to-point channel.
  • A publish-subscribe channel delivers each message to all of the attached consumers. Services use publish-subscribe channels for the one-to-many interaction styles described earlier. For example, an event message is usually sent over a publish-subscribe channel.

3.3.2. Implementing the interaction styles using messaging

One of the valuable features of messaging is that it’s flexible enough to support all the interaction styles described in section 3.1.1. Some interaction styles are directly implemented by messaging. Others must be implemented on top of messaging.

Let’s look at how to implement each interaction style, starting with request/response and asynchronous request/response.

Implementing request/response and asynchronous request/response

When a client and service interact using either request/response or asynchronous request/response, the client sends a request and the service sends back a reply. The difference between the two interaction styles is that with request/response the client expects the service to respond immediately, whereas with asynchronous request/response there is no such expectation. Messaging is inherently asynchronous, so it provides only asynchronous request/response. But a client could block until a reply is received.

The client and service implement the asynchronous request/response style interaction by exchanging a pair of messages. As figure 3.8 shows, the client sends a command message, which specifies the operation to perform, and parameters, to a point-to-point messaging channel owned by a service. The service processes the requests and sends a reply message, which contains the outcome, to a point-to-point channel owned by the client.

Figure 3.8. Implementing asynchronous request/response by including a reply channel and a message identifier in the request message. The receiver processes the message and sends the reply to the specified reply channel.

The client must tell the service where to send a reply message and must match reply messages to requests. Fortunately, solving these two problems isn't that difficult. The client sends a command message that has a reply channel header. The server writes the reply message, which contains a correlation id that has the same value as the message identifier, to the reply channel. The client uses the correlation id to match the reply message with the request.
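To make the mechanics concrete, here is a minimal Python sketch of this correlation-id matching, using in-memory queues as a stand-in for real point-to-point channels. All of the names (`send_command`, `handle_reply`, the header fields) are illustrative, not taken from any particular messaging library.

```python
import uuid
from collections import defaultdict, deque

# In-memory stand-ins for point-to-point channels (illustrative only; a real
# implementation would use a message broker's client library).
channels = defaultdict(deque)

def send(channel, message):
    channels[channel].append(message)

def receive(channel):
    return channels[channel].popleft()

# Client side: requests awaiting replies, keyed by message id.
pending_requests = {}

def send_command(service_channel, reply_channel, operation, params):
    # The command message carries a message id and a reply-channel header.
    msg_id = str(uuid.uuid4())
    message = {
        "headers": {"id": msg_id, "reply_channel": reply_channel},
        "body": {"operation": operation, "params": params},
    }
    pending_requests[msg_id] = message
    send(service_channel, message)
    return msg_id

# Service side: process the command and write a reply whose correlation_id
# header has the same value as the request's message id.
def handle_command(service_channel):
    request = receive(service_channel)
    reply = {
        "headers": {"correlation_id": request["headers"]["id"]},
        "body": {"outcome": "OK"},
    }
    send(request["headers"]["reply_channel"], reply)

# Client side: use the correlation id to match the reply with the request.
def handle_reply(reply_channel):
    reply = receive(reply_channel)
    request = pending_requests.pop(reply["headers"]["correlation_id"])
    return request, reply
```

Because the reply channel travels in the request's headers, the service needs no prior knowledge of which client (or which client instance) sent the command.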

Because the client and service communicate using messaging, the interaction is inherently asynchronous. In theory, a messaging client could block until it receives a reply, but in practice the client will process replies asynchronously. What’s more, replies are typically processed by any one of the client’s instances.

Implementing one-way notifications

Implementing one-way notifications is straightforward using asynchronous messaging. The client sends a message, typically a command message, to a point-to-point channel owned by the service. The service subscribes to the channel and processes the message. It doesn’t send back a reply.

Implementing publish/subscribe

Messaging has built-in support for the publish/subscribe style of interaction. A client publishes a message to a publish-subscribe channel that is read by multiple consumers. As described in chapters 4 and 5, services use publish/subscribe to publish domain events, which represent changes to domain objects. The service that publishes the domain events owns a publish-subscribe channel, whose name is derived from the domain class. For example, the Order Service publishes Order events to an Order channel, and the Delivery Service publishes Delivery events to a Delivery channel. A service that’s interested in a particular domain object’s events only has to subscribe to the appropriate channel.
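The idea of a channel named after the domain class can be sketched as follows; this is a hypothetical in-memory model, and the function names are invented for illustration.

```python
from collections import defaultdict

# channel name -> list of subscriber callbacks; an in-memory stand-in
# for a broker's publish-subscribe channels.
subscriptions = defaultdict(list)

def subscribe(channel, handler):
    subscriptions[channel].append(handler)

def publish(channel, event):
    # A publish-subscribe channel delivers each message to ALL subscribers.
    for handler in subscriptions[channel]:
        handler(event)

def publish_domain_event(domain_class, event):
    # The publishing service owns a channel whose name is derived from the
    # domain class, e.g. Order events go to the "Order" channel.
    publish(domain_class, event)
```

A service interested in `Order` events simply calls `subscribe("Order", handler)`; it never needs to know which service publishes them.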

Implementing publish/async responses

The publish/async responses interaction style is a higher-level style of interaction that’s implemented by combining elements of publish/subscribe and request/response. A client publishes a message that specifies a reply channel header to a publish-subscribe channel. A consumer writes a reply message containing a correlation id to the reply channel. The client gathers the responses by using the correlation id to match the reply messages with the request.

Each service in your application that has an asynchronous API will use one or more of these implementation techniques. A service that has an asynchronous API for invoking operations will have a message channel for requests. Similarly, a service that publishes events will publish them to an event message channel.

As described in section 3.1.2, it’s important to write an API specification for a service. Let’s look at how to do that for an asynchronous API.

3.3.3. Creating an API specification for a messaging-based service API

The specification for a service’s asynchronous API must, as figure 3.9 shows, specify the names of the message channels, the message types that are exchanged over each channel, and their formats. You must also describe the format of the messages using a standard such as JSON, XML, or Protobuf. But unlike with REST and Open API, there isn’t a widely adopted standard for documenting the channels and the message types. Instead, you need to write an informal document.

Figure 3.9. A service's asynchronous API consists of message channels and command, reply, and event message types.

A service’s asynchronous API consists of operations, invoked by clients, and events, published by the services. They’re documented in different ways. Let’s take a look at each one, starting with operations.

Documenting asynchronous operations

A service’s operations can be invoked using one of two different interaction styles:

  • Request/async response-style API—This consists of the service's command message channel, the types and formats of the command message types that the service accepts, and the types and formats of the reply messages sent by the service.
  • One-way notification-style API—This consists of the service's command message channel and the types and formats of the command message types that the service accepts.

A service may use the same request channel for both asynchronous request/response and one-way notification.

Documenting published events

A service can also publish events using a publish/subscribe interaction style. The specification of this style of API consists of the event channel and the types and formats of the event messages that are published by the service to the channel.

The messages and channels model of messaging is a great abstraction and a good way to design a service’s asynchronous API. But in order to implement a service you need to choose a messaging technology and determine how to implement your design using its capabilities. Let’s take a look at what’s involved.

3.3.4. Using a message broker

A messaging-based application typically uses a message broker, an infrastructure service through which services communicate. But a broker-based architecture isn't the only messaging architecture. You can also use a brokerless messaging architecture, in which the services communicate with one another directly. The two approaches, shown in figure 3.10, have different trade-offs, but usually a broker-based architecture is a better approach.

Figure 3.10. Services in a brokerless architecture communicate directly, whereas services in a broker-based architecture communicate via a message broker.

This book focuses on broker-based architecture, but it’s worthwhile to take a quick look at the brokerless architecture, because there may be scenarios where you find it useful.

Brokerless messaging

In a brokerless architecture, services can exchange messages directly. ZeroMQ (http://zeromq.org) is a popular brokerless messaging technology. It’s both a specification and a set of libraries for different languages. It supports a variety of transports, including TCP, UNIX-style domain sockets, and multicast.

The brokerless architecture has some benefits:

  • Allows lighter network traffic and better latency, because messages go directly from the sender to the receiver, instead of having to go from the sender to the message broker and from there to the receiver
  • Eliminates the possibility of the message broker being a performance bottleneck or a single point of failure
  • Features less operational complexity, because there is no message broker to set up and maintain

As appealing as these benefits may seem, brokerless messaging has significant drawbacks:

  • Services need to know about each other's locations and must therefore use one of the discovery mechanisms described earlier in section 3.2.4.
  • It offers reduced availability, because both the sender and receiver of a message must be available while the message is being exchanged.
  • Implementing mechanisms, such as guaranteed delivery, is more challenging.

In fact, some of these drawbacks, such as reduced availability and the need for service discovery, are the same as when using synchronous request/response.

Because of these limitations, most enterprise applications use a message broker-based architecture. Let’s look at how that works.

Overview of broker-based messaging

A message broker is an intermediary through which all messages flow. A sender writes the message to the message broker, and the message broker delivers it to the receiver. An important benefit of using a message broker is that the sender doesn’t need to know the network location of the consumer. Another benefit is that a message broker buffers messages until the consumer is able to process them.

There are many message brokers to choose from. Examples of popular open source message brokers include ActiveMQ, RabbitMQ, and Apache Kafka.

There are also cloud-based messaging services, such as AWS Kinesis (https://aws.amazon.com/kinesis/) and AWS SQS (https://aws.amazon.com/sqs/).

When selecting a message broker, you have various factors to consider, including the following:

  • Supported programming languages—You probably should pick one that supports a variety of programming languages.
  • Supported messaging standards—Does the message broker support any standards, such as AMQP and STOMP, or is it proprietary?
  • Message ordering—Does the message broker preserve ordering of messages?
  • Delivery guarantees—What kind of delivery guarantees does the broker make?
  • Persistence—Are messages persisted to disk and able to survive broker crashes?
  • Durability—If a consumer reconnects to the message broker, will it receive the messages that were sent while it was disconnected?
  • Scalability—How scalable is the message broker?
  • Latency—What is the end-to-end latency?
  • Competing consumers—Does the message broker support competing consumers?

Each broker makes different trade-offs. For example, a very low-latency broker might not preserve ordering, make no guarantees to deliver messages, and only store messages in memory. A messaging broker that guarantees delivery and reliably stores messages on disk will probably have higher latency. Which kind of message broker is the best fit depends on your application’s requirements. It’s even possible that different parts of your application will have different messaging requirements.

It’s likely, though, that message ordering and scalability are essential. Let’s now look at how to implement message channels using a message broker.

Implementing message channels using a message broker

Each message broker implements the message channel concept in a different way. As table 3.2 shows, JMS message brokers such as ActiveMQ have queues and topics. AMQP-based message brokers such as RabbitMQ have exchanges and queues. Apache Kafka has topics, AWS Kinesis has streams, and AWS SQS has queues. What’s more, some message brokers offer more flexible messaging than the message and channels abstraction described in this chapter.

Table 3.2. Each message broker implements the message channel concept in a different way.

Message broker                        | Point-to-point channel | Publish-subscribe channel
JMS                                   | Queue                  | Topic
Apache Kafka                          | Topic                  | Topic
AMQP-based brokers, such as RabbitMQ  | Exchange + queue       | Fanout exchange and a queue per consumer
AWS Kinesis                           | Stream                 | Stream
AWS SQS                               | Queue                  | (not supported)

Almost all the message brokers described here support both point-to-point and publish-subscribe channels. The one exception is AWS SQS, which only supports point-to-point channels.

Now let’s look at the benefits and drawbacks of broker-based messaging.

Benefits and drawbacks of broker-based messaging

There are many advantages to using broker-based messaging:

  • Loose coupling—A client makes a request by simply sending a message to the appropriate channel. The client is completely unaware of the service instances. It doesn’t need to use a discovery mechanism to determine the location of a service instance.
  • Message buffering—The message broker buffers messages until they can be processed. With a synchronous request/response protocol such as HTTP, both the client and service must be available for the duration of the exchange. With messaging, though, messages will queue up until they can be processed by the consumer. This means, for example, that an online store can accept orders from customers even when the order-fulfillment system is slow or unavailable. The messages will simply queue up until they can be processed.
  • Flexible communication—Messaging supports all the interaction styles described earlier.
  • Explicit interprocess communication—RPC-based mechanisms attempt to make invoking a remote service look the same as calling a local service. But due to the laws of physics and the possibility of partial failure, they’re in fact quite different. Messaging makes these differences very explicit, so developers aren’t lulled into a false sense of security.

There are some downsides to using messaging:

  • Potential performance bottleneck—There is a risk that the message broker could be a performance bottleneck. Fortunately, many modern message brokers are designed to be highly scalable.
  • Potential single point of failure—It’s essential that the message broker is highly available—otherwise, system reliability will be impacted. Fortunately, most modern brokers have been designed to be highly available.
  • Additional operational complexity—The messaging system is yet another system component that must be installed, configured, and operated.

Let’s look at some design issues you might face.

3.3.5. Competing receivers and message ordering

One challenge is how to scale out message receivers while preserving message ordering. It’s a common requirement to have multiple instances of a service in order to process messages concurrently. Moreover, even a single service instance will probably use threads to concurrently process multiple messages. Using multiple threads and service instances to concurrently process messages increases the throughput of the application. But the challenge with processing messages concurrently is ensuring that each message is processed once and in order.

For example, imagine that there are three instances of a service reading from the same point-to-point channel and that a sender publishes Order Created, Order Updated, and Order Cancelled event messages sequentially. A simplistic messaging implementation could concurrently deliver each message to a different receiver. Because of delays due to network issues or garbage collections, messages might be processed out of order, which would result in strange behavior. In theory, a service instance might process the Order Cancelled message before another service processes the Order Created message!

A common solution, used by modern message brokers like Apache Kafka and AWS Kinesis, is to use sharded (partitioned) channels. Figure 3.11 shows how this works. There are three parts to the solution:

  1. A sharded channel consists of two or more shards, each of which behaves like a channel.
  2. The sender specifies a shard key in the message’s header, which is typically an arbitrary string or sequence of bytes. The message broker uses the shard key to assign the message to a particular shard/partition. It might, for example, select the shard by computing the hash of the shard key modulo the number of shards.
  3. The messaging broker groups together multiple instances of a receiver and treats them as the same logical receiver. Apache Kafka, for example, uses the term consumer group. The message broker assigns each shard to a single receiver. It reassigns shards when receivers start up and shut down.

Figure 3.11. Scaling consumers while preserving message ordering by using a sharded (partitioned) message channel. The sender includes a shard key in each message. The message broker writes the message to the shard determined by the shard key and assigns each partition to one instance of the replicated receiver.

In this example, each Order event message has the orderId as its shard key. Each event for a particular order is published to the same shard, which is read by a single consumer instance. As a result, these messages are guaranteed to be processed in order.
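The shard-selection step (hash of the shard key modulo the number of shards) can be sketched like this. The use of SHA-256 is an arbitrary choice of stable hash for illustration, not what any particular broker actually uses.

```python
import hashlib

def shard_for(shard_key: str, num_shards: int) -> int:
    # Select the shard by computing a hash of the shard key modulo the
    # number of shards. A stable hash (rather than Python's per-process
    # randomized hash()) ensures every sender maps the same key to the
    # same shard.
    digest = hashlib.sha256(shard_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards
```

Because the mapping is deterministic, every event carrying the same orderId as its shard key lands on the same shard and is therefore read, in order, by a single consumer instance.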

3.3.6. Handling duplicate messages

Another challenge you must tackle when using messaging is dealing with duplicate messages. A message broker should ideally deliver each message only once, but guaranteeing exactly-once messaging is usually too costly. Instead, most message brokers promise to deliver a message at least once.

When the system is working normally, a message broker that guarantees at-least-once delivery will deliver each message only once. But a failure of a client, network, or message broker can result in a message being delivered multiple times. Say a client crashes after processing a message and updating its database—but before acknowledging the message. The message broker will deliver the unacknowledged message again, either to that client when it restarts or to another replica of the client.

Ideally, you should use a message broker that preserves ordering when redelivering messages. Imagine that the client processes an Order Created event followed by an Order Cancelled event for the same Order, and that somehow the Order Created event wasn’t acknowledged. The message broker should redeliver both the Order Created and Order Cancelled events. If it only redelivers the Order Created, the client may undo the cancelling of the Order.

There are a couple of different ways to handle duplicate messages:

  • Write idempotent message handlers.
  • Track messages and discard duplicates.

Let’s look at each option.

Writing idempotent message handlers

If the application logic that processes messages is idempotent, then duplicate messages are harmless. Application logic is idempotent if calling it multiple times with the same input values has no additional effect. For instance, cancelling an already-cancelled order is an idempotent operation. So is creating an order with a client-supplied ID. An idempotent message handler can be safely executed multiple times, provided that the message broker preserves ordering when redelivering messages.
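The two examples just mentioned can be sketched in a few lines; the dictionary-based store and the function names here are hypothetical, purely to show why redelivery is harmless.

```python
# In-memory stand-in for the service's data store (illustrative only).
orders = {}

def create_order(order_id, details):
    # Creating an order with a client-supplied ID is idempotent: a
    # redelivered Create message finds the order already present and
    # changes nothing.
    orders.setdefault(order_id, {"state": "CREATED", **details})
    return orders[order_id]

def cancel_order(order_id):
    # Cancelling an already-cancelled order leaves the state unchanged,
    # so a redelivered Cancel message has no additional effect.
    orders[order_id]["state"] = "CANCELLED"
    return orders[order_id]["state"]
```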

Unfortunately, application logic is often not idempotent. Or you may be using a message broker that doesn’t preserve ordering when redelivering messages. Duplicate or out-of-order messages can cause bugs. In this situation, you must write message handlers that track messages and discard duplicate messages.

Tracking messages and discarding duplicates

Consider, for example, a message handler that authorizes a consumer credit card. It must authorize the card exactly once for each order. This example of application logic has a different effect each time it’s invoked. If duplicate messages caused the message handler to execute this logic multiple times, the application would behave incorrectly. The message handler that executes this kind of application logic must become idempotent by detecting and discarding duplicate messages.

A simple solution is for a message consumer to track the messages that it has processed using the message id and discard any duplicates. It could, for example, store the message id of each message that it consumed in a database table. Figure 3.12 shows how to do this using a dedicated table.

Figure 3.12. A consumer detects and discards duplicate messages by recording the IDs of processed messages in a database table. If a message has already been processed, the INSERT into the PROCESSED_MESSAGES table will fail.

When a consumer handles a message, it records the message id in the database table as part of the transaction that creates and updates business entities. In this example, the consumer inserts a row containing the message id into a PROCESSED_MESSAGES table. If a message is a duplicate, the INSERT will fail and the consumer can discard the message.
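Assuming a relational database, the detect-and-discard logic of figure 3.12 might look like the following sketch (SQLite stands in for the real database, and the table and function names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PROCESSED_MESSAGES (message_id TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE orders (order_id TEXT, state TEXT)")

def handle_message(message_id, order_id):
    # Record the message id and update the business entity in the SAME
    # transaction. A redelivered message violates the primary key, so the
    # INSERT fails, the transaction rolls back, and the duplicate is
    # discarded without re-applying the business update.
    try:
        with conn:
            conn.execute(
                "INSERT INTO PROCESSED_MESSAGES (message_id) VALUES (?)",
                (message_id,))
            conn.execute(
                "INSERT INTO orders (order_id, state) VALUES (?, ?)",
                (order_id, "APPROVED"))
        return "processed"
    except sqlite3.IntegrityError:
        return "duplicate"
```

The primary key constraint is what makes the check atomic: there is no separate "have I seen this id?" query that could race with a concurrent delivery of the same message.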

Another option is for a message handler to record message ids in an application table instead of a dedicated table. This approach is particularly useful when using a NoSQL database that has a limited transaction model, so it doesn’t support updating two tables as part of a database transaction. Chapter 7 shows an example of this approach.

3.3.7. Transactional messaging

A service often needs to publish messages as part of a transaction that updates the database. For instance, throughout this book you see examples of services that publish domain events whenever they create or update business entities. Both the database update and the sending of the message must happen within a transaction. Otherwise, a service might update the database and then crash, for example, before sending the message. If the service doesn’t perform these two operations atomically, a failure could leave the system in an inconsistent state.

The traditional solution is to use a distributed transaction that spans the database and the message broker. But as you’ll learn in chapter 4, distributed transactions aren’t a good choice for modern applications. Moreover, many modern brokers such as Apache Kafka don’t support distributed transactions.

As a result, an application must use a different mechanism to reliably publish messages. Let’s look at how that works.

Using a database table as a message queue

Let’s imagine that your application is using a relational database. A straightforward way to reliably publish messages is to apply the Transactional outbox pattern. This pattern uses a database table as a temporary message queue. As figure 3.13 shows, a service that sends messages has an OUTBOX database table. As part of the database transaction that creates, updates, and deletes business objects, the service sends messages by inserting them into the OUTBOX table. Atomicity is guaranteed because this is a local ACID transaction.

Figure 3.13. A service reliably publishes a message by inserting it into an OUTBOX table as part of the transaction that updates the database. The Message Relay reads the OUTBOX table and publishes the messages to the message broker.

The OUTBOX table acts as a temporary message queue. The MessageRelay is a component that reads the OUTBOX table and publishes the messages to a message broker.
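Assuming a relational database (SQLite stands in here, and the schema and event format are illustrative), a minimal sketch of the pattern shows the essential point: the business update and the OUTBOX insert share one local ACID transaction.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT PRIMARY KEY, state TEXT)")
conn.execute(
    "CREATE TABLE OUTBOX ("
    " id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " destination TEXT, payload TEXT)")

def create_order(order_id):
    # The business update and the outgoing event are written in ONE local
    # ACID transaction, so either both happen or neither does. If the
    # service crashes mid-transaction, nothing is committed and no event
    # is lost or spuriously published.
    event = json.dumps({"type": "OrderCreated", "orderId": order_id})
    with conn:
        conn.execute(
            "INSERT INTO orders (order_id, state) VALUES (?, ?)",
            (order_id, "CREATED"))
        conn.execute(
            "INSERT INTO OUTBOX (destination, payload) VALUES (?, ?)",
            ("Order", event))
```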

Pattern: Transactional outbox

Publish an event or message as part of a database transaction by saving it in an OUTBOX in the database. See http://microservices.io/patterns/data/transactional-outbox.html.

You can use a similar approach with some NoSQL databases. Each business entity stored as a record in the database has an attribute that is a list of messages that need to be published. When a service updates an entity in the database, it appends a message to that list. This is atomic because it’s done with a single database operation. The challenge, though, is efficiently finding those business entities that have events and publishing them.

There are a couple of different ways to move messages from the database to the message broker. We’ll look at each one.

Publishing events using the Polling publisher pattern

If the application uses a relational database, a very simple way to publish the messages inserted into the OUTBOX table is for the MessageRelay to poll the table for unpublished messages. It periodically queries the table:

SELECT * FROM OUTBOX ORDER BY ... ASC

Next, the MessageRelay publishes those messages to the message broker, sending one to its destination message channel. Finally, it deletes those messages from the OUTBOX table:

BEGIN
 DELETE FROM OUTBOX WHERE ID in (....)
COMMIT
Pattern: Polling publisher

Publish messages by polling the outbox in the database. See http://microservices.io/patterns/data/polling-publisher.html.

Polling the database is a simple approach that works reasonably well at low scale. The downside is that frequently polling the database can be expensive. Also, whether you can use this approach with a NoSQL database depends on its querying capabilities. That’s because rather than querying an OUTBOX table, the application must query the business entities, and that may or may not be possible to do efficiently. Because of these drawbacks and limitations, it’s often better—and in some cases, necessary—to use the more sophisticated and performant approach of tailing the database transaction log.
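Putting the SELECT and DELETE steps together, the MessageRelay's poll-publish-delete cycle can be sketched as follows; SQLite and an in-memory list stand in for the database and the message broker, and all names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE OUTBOX (id INTEGER PRIMARY KEY, destination TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO OUTBOX VALUES (?, ?, ?)",
    [(1, "Order", "OrderCreated"), (2, "Order", "OrderUpdated")])
conn.commit()

published = []  # stand-in for the message broker

def relay_once():
    # 1. Poll the OUTBOX for unpublished messages, oldest first.
    rows = conn.execute(
        "SELECT id, destination, payload FROM OUTBOX ORDER BY id ASC").fetchall()
    # 2. Publish each message to its destination channel.
    for _, destination, payload in rows:
        published.append((destination, payload))
    # 3. Delete the published messages in a transaction.
    if rows:
        placeholders = ",".join("?" for _ in rows)
        with conn:
            conn.execute(
                f"DELETE FROM OUTBOX WHERE id IN ({placeholders})",
                [row[0] for row in rows])
```

A real relay would run this loop on a timer; note that a crash between steps 2 and 3 re-publishes the undeleted messages, which is exactly the at-least-once delivery discussed in section 3.3.6.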

Publishing events by applying the Transaction log tailing pattern

A sophisticated solution is for MessageRelay to tail the database transaction log (also called the commit log). Every committed update made by an application is represented as an entry in the database’s transaction log. A transaction log miner can read the transaction log and publish each change as a message to the message broker. Figure 3.14 shows how this approach works.

Figure 3.14. A service publishes messages inserted into the OUTBOX table by mining the database’s transaction log.

The Transaction Log Miner reads the transaction log entries. It converts each relevant log entry corresponding to an inserted message into a message and publishes that message to the message broker. This approach can be used to publish messages written to an OUTBOX table in an RDBMS or messages appended to records in a NoSQL database.
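The miner's core logic can be sketched as follows. The `LogEntry` shape, field names, and position tracking are assumptions made for this example; real miners use database-specific APIs such as the MySQL binlog or Postgres WAL.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a transaction log miner. It scans log entries in
// commit order, keeps only the inserts into the OUTBOX table, publishes them,
// and remembers its position so a restart doesn't republish old entries.
public class LogMinerSketch {
    record LogEntry(long lsn, String table, String operation, String payload) {}

    static final List<String> published = new ArrayList<>();
    static long lastProcessedLsn = -1; // the miner's saved position in the log

    static void publish(String payload) {
        published.add(payload); // message broker client stand-in
    }

    static void mine(List<LogEntry> transactionLog) {
        for (LogEntry entry : transactionLog) {
            if (entry.lsn() <= lastProcessedLsn) {
                continue; // already processed before a restart
            }
            if ("OUTBOX".equals(entry.table()) && "INSERT".equals(entry.operation())) {
                publish(entry.payload()); // only OUTBOX inserts become messages
            }
            lastProcessedLsn = entry.lsn();
        }
    }
}
```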

Pattern: Transaction log tailing

Publish changes made to the database by tailing the transaction log. See http://microservices.io/patterns/data/transaction-log-tailing.html.

There are a few examples of this approach in use:

  • Debezium (http://debezium.io)—An open source project that publishes database changes to the Apache Kafka message broker.
  • LinkedIn Databus (https://github.com/linkedin/databus)—An open source project that mines the Oracle transaction log and publishes the changes as events. LinkedIn uses Databus to synchronize various derived data stores with the system of record.
  • DynamoDB streams (http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html)—DynamoDB streams contain the time-ordered sequence of changes (creates, updates, and deletes) made to the items in a DynamoDB table in the last 24 hours. An application can read those changes from the stream and, for example, publish them as events.
  • Eventuate Tram (https://github.com/eventuate-tram/eventuate-tram-core)—Your author’s very own open source transaction messaging library that uses MySQL binlog protocol, Postgres WAL, or polling to read changes made to an OUTBOX table and publish them to Apache Kafka.

Although this approach is obscure, it works remarkably well. The challenge is that implementing it requires some development effort. You could, for example, write low-level code that calls database-specific APIs. Alternatively, you could use an open source framework such as Debezium that publishes changes made by an application to MySQL, Postgres, or MongoDB to Apache Kafka. The drawback of using Debezium is that its focus is capturing changes at the database level and that APIs for sending and receiving messages are outside of its scope. That’s why I created the Eventuate Tram framework, which provides the messaging APIs as well as transaction log tailing and polling.

3.3.8. Libraries and frameworks for messaging

A service needs to use a library to send and receive messages. One approach is to use the message broker’s client library, although there are several problems with using such a library directly:

  • The client library couples business logic that publishes messages to the message broker APIs.
  • A message broker’s client library is typically low level and requires many lines of code to send or receive a message. As a developer, you don’t want to repeatedly write boilerplate code. Also, as the author of this book I don’t want the example code cluttered with low-level boilerplate.
  • The client library usually provides only the basic mechanism to send and receive messages and doesn’t support the higher-level interaction styles.

A better approach is to use a higher-level library or framework that hides the low-level details and directly supports the higher-level interaction styles. For simplicity, the examples in this book use my Eventuate Tram framework. It has a simple, easy-to-understand API that hides the complexity of using the message broker. Besides an API for sending and receiving messages, Eventuate Tram also supports higher-level interaction styles such as asynchronous request/response and domain event publishing.

What?! Why the Eventuate frameworks?

The code samples in this book use the open source Eventuate frameworks I’ve developed for transactional messaging, event sourcing, and sagas. I chose to use my frameworks because, unlike with, say, dependency injection and the Spring framework, there are no widely adopted frameworks for many of the features the microservice architecture requires. Without the Eventuate Tram framework, many examples would have to use the low-level messaging APIs directly, making them much more complicated and obscuring important concepts. Or they would use a framework that isn’t widely adopted, which would also provoke criticism.

Instead, the examples use the Eventuate Tram frameworks, which have a simple, easy-to-understand API that hides the implementation details. You can use these frameworks in your applications. Alternatively, you can study the Eventuate Tram frameworks and reimplement the concepts yourself.

Eventuate Tram also implements two important mechanisms:

  • Transactional messaging—It publishes messages as part of a database transaction.
  • Duplicate message detection—The Eventuate Tram message consumer detects and discards duplicate messages, which is essential for ensuring that a consumer processes messages exactly once, as discussed in section 3.3.6.
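The idea behind duplicate message detection can be sketched as follows. This is an in-memory illustration only, with invented names: a real implementation, such as Eventuate Tram's, records the message ID in the database in the same transaction as the handler's own updates, so the check survives restarts.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of duplicate message detection: the consumer records the ID of each
// message it has processed and silently discards redeliveries.
public class IdempotentConsumerSketch {
    private final Set<String> processedMessageIds = new HashSet<>();

    // Returns true if the handler ran, false if the message was a duplicate.
    public boolean handle(String messageId, Runnable handler) {
        if (!processedMessageIds.add(messageId)) {
            return false; // seen before: discard without reprocessing
        }
        handler.run(); // first delivery: invoke the business logic
        return true;
    }
}
```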

Let’s take a look at the Eventuate Tram APIs.

Basic messaging

The basic messaging API consists of two Java interfaces: MessageProducer and MessageConsumer. A producer service uses the MessageProducer interface to publish messages to message channels. Here’s an example of using this interface:

MessageProducer messageProducer = ...;
String destination = ...;
String payload = ...;
messageProducer.send(destination, MessageBuilder.withPayload(payload).build());

A consumer service uses the MessageConsumer interface to subscribe to messages:

MessageConsumer messageConsumer;
messageConsumer.subscribe(subscriberId, Collections.singleton(destination),
     message -> { ... })

MessageProducer and MessageConsumer are the foundation of the higher-level APIs for asynchronous request/response and domain event publishing.

Let’s talk about how to publish and subscribe to events.

Domain event publishing

Eventuate Tram has APIs for publishing and consuming domain events. Chapter 5 explains that domain events are events that are emitted by an aggregate (business object) when it’s created, updated, or deleted. A service publishes a domain event using the DomainEventPublisher interface. Here is an example:

DomainEventPublisher domainEventPublisher;

String accountId = ...;

DomainEvent domainEvent = new AccountDebited(...);

domainEventPublisher.publish("Account", accountId, Collections.singletonList(
     domainEvent));

A service consumes domain events using the DomainEventDispatcher. An example follows:

DomainEventHandlers domainEventHandlers = DomainEventHandlersBuilder
            .forAggregateType("Order")
            .onEvent(AccountDebited.class, domainEvent -> { ... })
            .build();

new DomainEventDispatcher("eventDispatcherId",
            domainEventHandlers,
            messageConsumer);

Events aren’t the only high-level messaging pattern supported by Eventuate Tram. It also supports command/reply-based messaging.

Command/reply-based messaging

A client can send a command message to a service using the CommandProducer interface. For example

CommandProducer commandProducer = ...;

Map<String, String> extraMessageHeaders = Collections.emptyMap();

String commandId = commandProducer.send("CustomerCommandChannel",
        new DoSomethingCommand(),
        "ReplyToChannel",
        extraMessageHeaders);

A service consumes command messages using the CommandDispatcher class. CommandDispatcher uses the MessageConsumer interface to subscribe to specified events. It dispatches each command message to the appropriate handler method. Here’s an example:

CommandHandlers commandHandlers = CommandHandlersBuilder
            .fromChannel(commandChannel)
            .onMessage(DoSomethingCommand.class,
                       (command) -> { ... ; return withSuccess(); })
            .build();

CommandDispatcher dispatcher = new CommandDispatcher("subscribeId",
     commandHandlers, messageConsumer, messageProducer);

Throughout this book, you’ll see code examples that use these APIs for sending and receiving messages.

As you’ve seen, the Eventuate Tram framework implements transactional messaging for Java applications. It provides a low-level API for sending and receiving messages transactionally. It also provides the higher-level APIs for publishing and consuming domain events and for sending and processing commands.

Let’s now look at a service design approach that uses asynchronous messaging to improve availability.

3.4. Using asynchronous messaging to improve availability

As you’ve seen, a variety of IPC mechanisms have different trade-offs. One particular trade-off is how your choice of IPC mechanism impacts availability. In this section, you’ll learn that synchronous communication with other services as part of request handling reduces application availability. As a result, you should design your services to use asynchronous messaging whenever possible.

Let’s first look at the problem with synchronous communication and how it impacts availability.

3.4.1. Synchronous communication reduces availability

REST is an extremely popular IPC mechanism. You may be tempted to use it for interservice communication. The problem with REST, though, is that it’s a synchronous protocol: an HTTP client must wait for the service to send a response. Whenever services communicate using a synchronous protocol, the availability of the application is reduced.

To see why, consider the scenario shown in figure 3.15. The Order Service has a REST API for creating an Order. It invokes the Consumer Service and the Restaurant Service to validate the Order. Both of those services also have REST APIs.

Figure 3.15. Order Service invokes other services using REST. It’s simple, but it requires all of the services to be simultaneously available, which reduces the availability of the API.

The sequence of steps for creating an order is as follows:

  1. The client makes an HTTP POST /orders request to the Order Service.
  2. Order Service retrieves consumer information by making an HTTP GET /consumers/id request to the Consumer Service.
  3. Order Service retrieves restaurant information by making an HTTP GET /restaurant/id request to the Restaurant Service.
  4. Order Taking validates the request using the consumer and restaurant information.
  5. Order Taking creates an Order.
  6. Order Taking sends an HTTP response to the client.

Because these services use HTTP, they must all be simultaneously available in order for the FTGO application to process the CreateOrder request. The FTGO application couldn’t create orders if any one of these three services is down. Mathematically speaking, the availability of a system operation is the product of the availability of the services that are invoked by that operation. If the Order Service and the two services that it invokes are 99.5% available, the overall availability is 99.5%³ ≈ 98.5%, which is significantly less. Each additional service that participates in handling a request further reduces availability.
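The availability arithmetic above is just a product of per-service availabilities, which can be expressed as a one-line helper (the class and method names are illustrative):

```java
// A synchronous operation is only as available as the product of the
// availabilities of every service it must call.
public class AvailabilitySketch {
    static double overallAvailability(double perServiceAvailability, int serviceCount) {
        return Math.pow(perServiceAvailability, serviceCount);
    }
}
```

For example, `overallAvailability(0.995, 3)` is about 0.985, and with ten synchronously invoked services at 99.5% each, overall availability drops to roughly 95.1%.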

This problem isn’t specific to REST-based communication. Availability is reduced whenever a service can only respond to its client after receiving a response from another service. This problem exists even if services communicate using request/response style interaction over asynchronous messaging. For example, the availability of the Order Service would be reduced if it sent a message to the Consumer Service via a message broker and then waited for a response.

If you want to maximize availability, you must minimize the amount of synchronous communication. Let’s look at how to do that.

3.4.2. Eliminating synchronous interaction

There are a few different ways to reduce the amount of synchronous communication with other services while handling synchronous requests. One solution is to avoid the problem entirely by defining services that only have asynchronous APIs. That’s not always possible, though. For example, public APIs are commonly RESTful. Services are therefore sometimes required to have synchronous APIs.

Fortunately, there are ways to handle synchronous requests without making synchronous requests. Let’s talk about the options.

Use asynchronous interaction styles

Ideally, all interactions should be done using the asynchronous interaction styles described earlier in this chapter. For example, say a client of the FTGO application used an asynchronous request/asynchronous response style of interaction to create orders. A client creates an order by sending a request message to the Order Service. This service then asynchronously exchanges messages with other services and eventually sends a reply message to the client. Figure 3.16 shows the design.

Figure 3.16. The FTGO application has higher availability if its services communicate using asynchronous messaging instead of synchronous calls.

The client and the services communicate asynchronously by sending messages via messaging channels. No participant in this interaction is ever blocked waiting for a response.

Such an architecture would be extremely resilient, because the message broker buffers messages until they can be consumed. The problem, however, is that services often have an external API that uses a synchronous protocol such as REST, so it must respond to requests immediately.

If a service has a synchronous API, one way to improve availability is to replicate data. Let’s see how that works.

Replicate data

One way to minimize synchronous requests during request processing is to replicate data. A service maintains a replica of the data that it needs when processing requests. It keeps the replica up-to-date by subscribing to events published by the services that own the data. For example, Order Service could maintain a replica of data owned by Consumer Service and Restaurant Service. This would enable Order Service to handle a request to create an order without having to interact with those services. Figure 3.17 shows the design.

Figure 3.17. Order Service is self-contained because it has replicas of the consumer and restaurant data.

Consumer Service and Restaurant Service publish events whenever their data changes. Order Service subscribes to those events and updates its replica.
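Keeping a replica current is just a matter of applying each incoming event to a local store. The following sketch shows the idea for Order Service's copy of consumer data, as in figure 3.17; the event names and fields are assumptions for illustration, and in practice the handlers would be subscribed via a DomainEventDispatcher.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of maintaining a local replica from domain events.
public class ConsumerReplicaSketch {
    // Order Service's local copy of data owned by Consumer Service.
    private final Map<String, String> consumerReplica = new HashMap<>();

    void onConsumerCreated(String consumerId, String name) {
        consumerReplica.put(consumerId, name);
    }

    void onConsumerUpdated(String consumerId, String name) {
        consumerReplica.put(consumerId, name);
    }

    // Request handling can now validate a consumer without a synchronous call.
    boolean consumerExists(String consumerId) {
        return consumerReplica.containsKey(consumerId);
    }
}
```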

In some situations, replicating data is a useful approach. For example, chapter 5 describes how Order Service replicates data from Restaurant Service so that it can validate and price menu items. One drawback of replication is that it can sometimes require the replication of large amounts of data, which is inefficient. For example, it may not be practical for Order Service to maintain a replica of the data owned by Consumer Service, due to the large number of consumers. Another drawback of replication is that it doesn’t solve the problem of how a service updates data owned by other services.

One way to solve that problem is for a service to delay interacting with other services until after it responds to its client. We’ll next look at how that works.

Finish processing after returning a response

Another way to eliminate synchronous communication during request processing is for a service to handle a request as follows:

  1. Validate the request using only the data available locally.
  2. Update its database, including inserting messages into the OUTBOX table.
  3. Return a response to its client.

While handling a request, the service doesn’t synchronously interact with any other services. Instead, it asynchronously sends messages to other services. This approach ensures that the services are loosely coupled. As you’ll learn in the next chapter, this is often implemented using a saga.
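The request-handling steps above can be sketched as follows. The single local transaction that updates the business entity and the OUTBOX table together is simulated here by two in-memory collections, and every name is illustrative rather than part of any real API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of handling a request without calling any other service.
public class CreateOrderSketch {
    static final Map<Long, String> orders = new HashMap<>(); // ORDERS table
    static final List<String> outbox = new ArrayList<>();    // OUTBOX table
    static long nextOrderId = 1;

    static long createOrder(String orderDetails) {
        // Step 1: validate using only locally available data.
        if (orderDetails == null || orderDetails.isBlank()) {
            throw new IllegalArgumentException("invalid order");
        }
        // Step 2: update the database, including the OUTBOX inserts, in what
        // would be a single local ACID transaction.
        long orderId = nextOrderId++;
        orders.put(orderId, "PENDING");
        outbox.add("ValidateConsumerInfo:" + orderId);
        outbox.add("ValidateOrderDetails:" + orderId);
        // Step 3: respond immediately; validation completes asynchronously.
        return orderId;
    }
}
```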

For example, if Order Service uses this approach, it creates an order in a PENDING state and then validates the order asynchronously by exchanging messages with other services. Figure 3.18 shows what happens when the createOrder() operation is invoked. The sequence of events is as follows:

  1. Order Service creates an Order in a PENDING state.
  2. Order Service returns a response to its client containing the order ID.
  3. Order Service sends a ValidateConsumerInfo message to Consumer Service.

     Figure 3.18. Order Service creates an order without invoking any other service. It then asynchronously validates the newly created Order by exchanging messages with other services, including Consumer Service and Restaurant Service.

  4. Order Service sends a ValidateOrderDetails message to Restaurant Service.
  5. Consumer Service receives a ValidateConsumerInfo message, verifies the consumer can place an order, and sends a ConsumerValidated message to Order Service.
  6. Restaurant Service receives a ValidateOrderDetails message, verifies the menu items are valid and that the restaurant can deliver to the order’s delivery address, and sends an OrderDetailsValidated message to Order Service.
  7. Order Service receives ConsumerValidated and OrderDetailsValidated and changes the state of the order to VALIDATED.
  8. ...

Order Service can receive the ConsumerValidated and OrderDetailsValidated messages in either order. It keeps track of which message it receives first by changing the state of the order. If it receives the ConsumerValidated first, it changes the state of the order to CONSUMER_VALIDATED, whereas if it receives the OrderDetailsValidated message first, it changes its state to ORDER_DETAILS_VALIDATED. Order Service changes the state of the Order to VALIDATED when it receives the other message.
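This state tracking can be sketched as a small state machine. The state names come from the text; the class and method names are illustrative.

```java
// Order Service records which validation reply arrived first and moves to
// VALIDATED once both have arrived, regardless of their order.
public class OrderValidationStateSketch {
    enum State { PENDING, CONSUMER_VALIDATED, ORDER_DETAILS_VALIDATED, VALIDATED }

    private State state = State.PENDING;

    void onConsumerValidated() {
        state = (state == State.ORDER_DETAILS_VALIDATED)
                ? State.VALIDATED
                : State.CONSUMER_VALIDATED;
    }

    void onOrderDetailsValidated() {
        state = (state == State.CONSUMER_VALIDATED)
                ? State.VALIDATED
                : State.ORDER_DETAILS_VALIDATED;
    }

    State state() { return state; }
}
```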

After the Order has been validated, Order Service completes the rest of the order-creation process, discussed in the next chapter. What’s nice about this approach is that even if Consumer Service is down, for example, Order Service still creates orders and responds to its clients. Eventually, Consumer Service will come back up and process any queued messages, and orders will be validated.

A drawback of a service responding before fully processing a request is that it makes the client more complex. For example, Order Service makes minimal guarantees about the state of a newly created order when it returns a response. It creates the order and returns immediately before validating the order and authorizing the consumer’s credit card. Consequently, in order for the client to know whether the order was successfully created, either it must periodically poll or Order Service must send it a notification message. As complex as it sounds, in many situations this is the preferred approach—especially because it also addresses the distributed transaction management issues I discuss in the next chapter. In chapters 4 and 5, for example, I describe how Order Service uses this approach.

Summary

  • The microservice architecture is a distributed architecture, so interprocess communication plays a key role.
  • It’s essential to carefully manage the evolution of a service’s API. Backward-compatible changes are the easiest to make because they don’t impact clients. If you make a breaking change to a service’s API, it will typically need to support both the old and new versions until its clients have been upgraded.
  • There are numerous IPC technologies, each with different trade-offs. One key design decision is to choose either a synchronous remote procedure invocation pattern or the asynchronous Messaging pattern. Synchronous remote procedure invocation-based protocols, such as REST, are the easiest to use. But services should ideally communicate using asynchronous messaging in order to increase availability.
  • In order to prevent failures from cascading through a system, a service client that uses a synchronous protocol must be designed to handle partial failures, which are when the invoked service is either down or exhibiting high latency. In particular, it must use timeouts when making requests, limit the number of outstanding requests, and use the Circuit breaker pattern to avoid making calls to a failing service.
  • An architecture that uses synchronous protocols must include a service discovery mechanism in order for clients to determine the network location of a service instance. The simplest approach is to use the service discovery mechanism implemented by the deployment platform: the Server-side discovery and 3rd party registration patterns. But an alternative approach is to implement service discovery at the application level: the Client-side discovery and Self registration patterns. It’s more work, but it does handle the scenario where services are running on multiple deployment platforms.
  • A good way to design a messaging-based architecture is to use the messages and channels model, which abstracts the details of the underlying messaging system. You can then map that design to a specific messaging infrastructure, which is typically message broker–based.
  • One key challenge when using messaging is atomically updating the database and publishing a message. A good solution is to use the Transactional outbox pattern and first write the message to the database as part of the database transaction. A separate process then retrieves the message from the database using either the Polling publisher pattern or the Transaction log tailing pattern and publishes it to the message broker.

Chapter 4. Managing transactions with sagas

This chapter covers

  • Why distributed transactions aren’t a good fit for modern applications
  • Using the Saga pattern to maintain data consistency in a microservice architecture
  • Coordinating sagas using choreography and orchestration
  • Using countermeasures to deal with the lack of isolation

When Mary started investigating the microservice architecture, one of her biggest concerns was how to implement transactions that span multiple services. Transactions are an essential ingredient of every enterprise application. Without transactions it would be impossible to maintain data consistency.

ACID (Atomicity, Consistency, Isolation, Durability) transactions greatly simplify the job of the developer by providing the illusion that each transaction has exclusive access to the data. In a microservice architecture, transactions that are within a single service can still use ACID transactions. The challenge, however, lies in implementing transactions for operations that update data owned by multiple services. For example, as described in chapter 2, the createOrder() operation spans numerous services, including Order Service, Kitchen Service, and Accounting Service. Operations such as these need a transaction management mechanism that works across services.

Mary discovered that, as mentioned in chapter 2, the traditional approach to distributed transaction management isn’t a good choice for modern applications. Instead of ACID transactions, an operation that spans services must use what’s known as a saga, a message-driven sequence of local transactions, to maintain data consistency. One challenge with sagas is that they are ACD (Atomicity, Consistency, Durability). They lack the isolation feature of traditional ACID transactions. As a result, an application must use what are known as countermeasures, design techniques that prevent or reduce the impact of concurrency anomalies caused by the lack of isolation.

In many ways, the biggest obstacle that Mary and the FTGO developers will face when adopting microservices is moving from a single database with ACID transactions to a multi-database architecture with ACD sagas. They’re used to the simplicity of the ACID transaction model. But in reality, even monolithic applications such as the FTGO application typically don’t use textbook ACID transactions. For example, many applications use a lower transaction isolation level in order to improve performance. Also, many important business processes, such as transferring money between accounts at different banks, are eventually consistent. Not even Starbucks uses two-phase commit (www.enterpriseintegrationpatterns.com/ramblings/18_starbucks.html).

I begin this chapter by looking at the challenges of transaction management in the microservice architecture and explaining why the traditional approach to distributed transaction management isn’t an option. Next I explain how to maintain data consistency using sagas. After that I look at the two different ways of coordinating sagas: choreography, where participants exchange events without a centralized point of control, and orchestration, where a centralized controller tells the saga participants what operation to perform. I discuss how to use countermeasures to prevent or reduce the impact of concurrency anomalies caused by the lack of isolation between sagas. Finally, I describe the implementation of an example saga.

Let’s start by taking a look at the challenge of managing transactions in a microservice architecture.

4.1. Transaction management in a microservice architecture

Almost every request handled by an enterprise application is executed within a database transaction. Enterprise application developers use frameworks and libraries that simplify transaction management. Some frameworks and libraries provide a programmatic API for explicitly beginning, committing, and rolling back transactions. Other frameworks, such as the Spring framework, provide a declarative mechanism. Spring provides an @Transactional annotation that arranges for method invocations to be automatically executed within a transaction. As a result, it’s straightforward to write transactional business logic.
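
The contrast between the programmatic and declarative styles can be illustrated with a small plain-Java sketch. The inTransaction() helper below is hypothetical: it stands in for what Spring's @Transactional infrastructure arranges (begin, commit on success, roll back on exception), and the in-memory list stands in for the database. It is not the real Spring API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class TransactionSketch {

    // Stand-in for a database table of committed orders.
    public static final List<String> committedOrders = new ArrayList<>();

    // What the declarative mechanism arranges: begin, run the action,
    // commit on success, roll back on exception.
    public static <T> T inTransaction(Supplier<T> action) {
        List<String> snapshot = new ArrayList<>(committedOrders); // "begin"
        try {
            return action.get();                                  // "commit"
        } catch (RuntimeException e) {
            committedOrders.clear();                              // "rollback"
            committedOrders.addAll(snapshot);
            throw e;
        }
    }

    // A transactional service method: its updates are all-or-nothing.
    public static String createOrder(String orderId, boolean valid) {
        return inTransaction(() -> {
            committedOrders.add(orderId);
            if (!valid) {
                throw new RuntimeException("order validation failed");
            }
            return orderId;
        });
    }
}
```

The point of the declarative style is exactly this wrapping: the business logic in createOrder() never mentions begin, commit, or rollback.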

Or, to be more precise, transaction management is straightforward in a monolithic application that accesses a single database. Transaction management is more challenging in a complex monolithic application that uses multiple databases and message brokers. And in a microservice architecture, transactions span multiple services, each of which has its own database. In this situation, the application must use a more elaborate mechanism to manage transactions. As you’ll learn, the traditional approach of using distributed transactions isn’t a viable option for modern applications. Instead, a microservices-based application must use sagas.

Before I explain sagas, let’s first look at why transaction management is challenging in a microservice architecture.

4.1.1. The need for distributed transactions in a microservice architecture

Imagine that you’re the FTGO developer responsible for implementing the createOrder() system operation. As described in chapter 2, this operation must verify that the consumer can place an order, verify the order details, authorize the consumer’s credit card, and create an Order in the database. It’s relatively straightforward to implement this operation in the monolithic FTGO application. All the data required to validate the order is readily accessible. What’s more, you can use an ACID transaction to ensure data consistency. You might use Spring’s @Transactional annotation on the createOrder() service method.

In contrast, implementing the same operation in a microservice architecture is much more complicated. As figure 4.1 shows, the needed data is scattered around multiple services. The createOrder() operation accesses data in numerous services. It reads data from Consumer Service and updates data in Order Service, Kitchen Service, and Accounting Service.

Figure 4.1. The createOrder() operation updates data in several services. It must use a mechanism to maintain data consistency across those services.

Because each service has its own database, you need to use a mechanism to maintain data consistency across those databases.

4.1.2. The trouble with distributed transactions

The traditional approach to maintaining data consistency across multiple services, databases, or message brokers is to use distributed transactions. The de facto standard for distributed transaction management is the X/Open Distributed Transaction Processing (DTP) Model (X/Open XA—see https://en.wikipedia.org/wiki/X/Open_XA). XA uses two-phase commit (2PC) to ensure that all participants in a transaction either commit or roll back. An XA-compliant technology stack consists of XA-compliant databases and message brokers, database drivers and messaging APIs, and an interprocess communication mechanism that propagates the XA global transaction ID. Most SQL databases are XA compliant, as are some message brokers. Java EE applications can, for example, use JTA to perform distributed transactions.

As simple as this sounds, there are a variety of problems with distributed transactions. One problem is that many modern technologies, including NoSQL databases such as MongoDB and Cassandra, don’t support them. Also, distributed transactions aren’t supported by modern message brokers such as RabbitMQ and Apache Kafka. As a result, if you insist on using distributed transactions, you can’t use many modern technologies.

Another problem with distributed transactions is that they are a form of synchronous IPC, which reduces availability. In order for a distributed transaction to commit, all the participating services must be available. As described in chapter 3, the overall availability is the product of the availabilities of all of the participants in the transaction. If a distributed transaction involves two services that are each 99.5% available, then the overall availability is 99%, which is significantly lower. Each additional service involved in a distributed transaction further reduces availability. There’s also Eric Brewer’s CAP theorem, which states that a system can have only two of the following three properties: consistency, availability, and partition tolerance (https://en.wikipedia.org/wiki/CAP_theorem). Today, architects prefer to have a system that’s available rather than one that’s consistent.
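
The availability arithmetic is easy to verify. This small sketch (not from the book's code) multiplies participant availabilities:

```java
public class AvailabilitySketch {

    // The availability of a distributed transaction is the product of the
    // availabilities of all of its participants.
    public static double overall(double... serviceAvailabilities) {
        double product = 1.0;
        for (double availability : serviceAvailabilities) {
            product *= availability;
        }
        return product;
    }
}
```

With two services at 99.5%, overall(0.995, 0.995) yields 0.990025, the 99% figure quoted above, and each additional participant pushes the product lower.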

On the surface, distributed transactions are appealing. From a developer’s perspective, they have the same programming model as local transactions. But because of the problems mentioned so far, distributed transactions aren’t a viable technology for modern applications. Chapter 3 described how to send messages as part of a database transaction without using distributed transactions. To solve the more complex problem of maintaining data consistency in a microservice architecture, an application must use a different mechanism that builds on the concept of loosely coupled, asynchronous services. This is where sagas come in.

4.1.3. Using the Saga pattern to maintain data consistency

Sagas are mechanisms to maintain data consistency in a microservice architecture without having to use distributed transactions. You define a saga for each system command that needs to update data in multiple services. A saga is a sequence of local transactions. Each local transaction updates data within a single service using the familiar ACID transaction frameworks and libraries mentioned earlier.

Pattern: Saga

Maintain data consistency across services using a sequence of local transactions that are coordinated using asynchronous messaging. See http://microservices.io/patterns/data/saga.html.

The system operation initiates the first step of the saga. The completion of a local transaction triggers the execution of the next local transaction. Later, in section 4.2, you’ll see how coordination of the steps is implemented using asynchronous messaging. An important benefit of asynchronous messaging is that it ensures that all the steps of a saga are executed, even if one or more of the saga’s participants is temporarily unavailable.

Sagas differ from ACID transactions in a couple of important ways. As I describe in detail in section 4.3, they lack the isolation property of ACID transactions. Also, because each local transaction commits its changes, a saga must be rolled back using compensating transactions. I talk more about compensating transactions later in this section. Let’s take a look at an example saga.

An example saga: the Create Order Saga

The example saga used throughout this chapter is the Create Order Saga, which is shown in figure 4.2. The Order Service implements the createOrder() operation using this saga. The saga’s first local transaction is initiated by the external request to create an order. The other five local transactions are each triggered by completion of the previous one.

Figure 4.2. Creating an Order using a saga. The createOrder() operation is implemented by a saga, which consists of local transactions in several services.

This saga consists of the following local transactions:

  1. Order Service: Create an Order in an APPROVAL_PENDING state.
  2. Consumer Service: Verify that the consumer can place an order.
  3. Kitchen Service: Validate order details and create a Ticket in the CREATE_PENDING state.
  4. Accounting Service: Authorize the consumer’s credit card.
  5. Kitchen Service: Change the state of the Ticket to AWAITING_ACCEPTANCE.
  6. Order Service: Change the state of the Order to APPROVED.

Later, in section 4.2, I describe how the services that participate in a saga communicate using asynchronous messaging. A service publishes a message when a local transaction completes. This message then triggers the next step in the saga. Not only does using messaging ensure the saga participants are loosely coupled, it also guarantees that a saga completes. That’s because if the recipient of a message is temporarily unavailable, the message broker buffers the message until it can be delivered.

On the surface, sagas seem straightforward, but there are a few challenges to using them. One challenge is the lack of isolation between sagas. Section 4.3 describes how to handle this problem. Another challenge is rolling back changes when an error occurs. Let’s take a look at how to do that.

Sagas use compensating transactions to roll back changes

A great feature of traditional ACID transactions is that the business logic can easily roll back a transaction if it detects the violation of a business rule. It executes a ROLLBACK statement, and the database undoes all the changes made so far. Unfortunately, sagas can’t be automatically rolled back, because each step commits its changes to the local database. This means, for example, that if the authorization of the credit card fails in the fourth step of the Create Order Saga, the FTGO application must explicitly undo the changes made by the first three steps. You must write what are known as compensating transactions.

Suppose that the (n + 1)th transaction of a saga fails. The effects of the previous n transactions must be undone. Conceptually, each of those steps, Ti, has a corresponding compensating transaction, Ci, which undoes the effects of Ti. To undo the effects of those first n steps, the saga must execute each Ci in reverse order. The sequence of steps is T1 ... Tn, Cn ... C1, as shown in figure 4.3. In this example, Tn+1 fails, which requires steps T1 ... Tn to be undone.

Figure 4.3. When a step of a saga fails because of a business rule violation, the saga must explicitly undo the updates made by previous steps.

The saga executes the compensation transactions in reverse order of the forward transactions: Cn ... C1. The mechanics of sequencing the Cis aren’t any different than sequencing the Tis. The completion of Ci must trigger the execution of Ci-1.
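
The T1 ... Tn, Cn ... C1 ordering can be sketched as a loop that records the compensation of each completed step and, on failure, replays those compensations in reverse. The step and compensation names below are illustrative, not the book's actual code:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class SagaSketch {

    // A step pairs a forward local transaction with its compensating
    // transaction (null when the step doesn't need one).
    public record Step(String name, boolean succeeds, String compensation) {}

    // Executes steps in order. On the first failure, runs the compensations
    // of the already-completed steps in reverse order: T1 ... Tn, Cn ... C1.
    public static List<String> execute(List<Step> steps) {
        List<String> log = new ArrayList<>();
        Deque<String> compensations = new ArrayDeque<>(); // a stack yields reverse order
        for (Step step : steps) {
            if (step.succeeds()) {
                log.add(step.name());
                if (step.compensation() != null) {
                    compensations.push(step.compensation());
                }
            } else {
                log.add(step.name() + " FAILED");
                while (!compensations.isEmpty()) {
                    log.add(compensations.pop());
                }
                break;
            }
        }
        return log;
    }
}
```

Pushing each completed step's compensation onto a stack is what makes the Ci run in the reverse of the order in which the Ti committed.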

Consider, for example, the Create Order Saga. This saga can fail for a variety of reasons:

  • The consumer information is invalid or the consumer isn’t allowed to create orders.
  • The restaurant information is invalid or the restaurant is unable to accept orders.
  • The authorization of the consumer’s credit card fails.

If a local transaction fails, the saga’s coordination mechanism must execute compensating transactions that reject the Order and possibly the Ticket. Table 4.1 shows the compensating transactions for each step of the Create Order Saga. It’s important to note that not all steps need compensating transactions. Read-only steps, such as verifyConsumerDetails(), don’t need compensating transactions. Nor do steps such as authorizeCreditCard() that are followed by steps that always succeed.

Table 4.1. The compensating transactions for the Create Order Saga

Step | Service            | Transaction             | Compensating transaction
-----|--------------------|-------------------------|-------------------------
1    | Order Service      | createOrder()           | rejectOrder()
2    | Consumer Service   | verifyConsumerDetails() |
3    | Kitchen Service    | createTicket()          | rejectTicket()
4    | Accounting Service | authorizeCreditCard()   |
5    | Kitchen Service    | approveTicket()         |
6    | Order Service      | approveOrder()          |

Section 4.3 discusses how the first three steps of the Create Order Saga are termed compensatable transactions because they’re followed by steps that can fail, how the fourth step is termed the saga’s pivot transaction because it’s followed by steps that never fail, and how the last two steps are termed retriable transactions because they always succeed.
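
One way to make this classification concrete is as a function of a step's position relative to the pivot. This is a sketch with hypothetical names, not the book's code:

```java
public class StepTypes {

    public enum StepType { COMPENSATABLE, PIVOT, RETRIABLE }

    // Steps before the pivot may need to be undone, so they must be
    // compensatable; the pivot is the saga's go/no-go point; steps after it
    // always succeed and are simply retried until they complete.
    public static StepType classify(int step, int pivotStep) {
        if (step < pivotStep) {
            return StepType.COMPENSATABLE;
        }
        if (step == pivotStep) {
            return StepType.PIVOT;
        }
        return StepType.RETRIABLE;
    }
}
```

For the Create Order Saga, the pivot is step 4, authorizeCreditCard(): steps 1 through 3 are compensatable, and steps 5 and 6 are retriable.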

To see how compensating transactions are used, imagine a scenario where the authorization of the consumer’s credit card fails. In this scenario, the saga executes the following local transactions:

  1. Order Service: Create an Order in an APPROVAL_PENDING state.
  2. Consumer Service: Verify that the consumer can place an order.
  3. Kitchen Service: Validate order details and create a Ticket in the CREATE_PENDING state.
  4. Accounting Service: Authorize the consumer’s credit card, which fails.
  5. Kitchen Service: Change the state of the Ticket to CREATE_REJECTED.
  6. Order Service: Change the state of the Order to REJECTED.

The fifth and sixth steps are compensating transactions that undo the updates made by Kitchen Service and Order Service, respectively. A saga’s coordination logic is responsible for sequencing the execution of forward and compensating transactions. Let’s look at how that works.

4.2. Coordinating sagas

A saga’s implementation consists of logic that coordinates the steps of the saga. When a saga is initiated by a system command, the coordination logic must select and tell the first saga participant to execute a local transaction. Once that transaction completes, the saga’s sequencing coordination selects and invokes the next saga participant. This process continues until the saga has executed all the steps. If any local transaction fails, the saga must execute the compensating transactions in reverse order. There are a couple of different ways to structure a saga’s coordination logic:

  • Choreography: Distribute the decision making and sequencing among the saga participants. They primarily communicate by exchanging events.
  • Orchestration: Centralize a saga’s coordination logic in a saga orchestrator class. A saga orchestrator sends command messages to saga participants telling them which operations to perform.

Let’s look at each option, starting with choreography.

4.2.1. Choreography-based sagas

One way you can implement a saga is by using choreography. When using choreography, there’s no central coordinator telling the saga participants what to do. Instead, the saga participants subscribe to each other’s events and respond accordingly. To show how choreography-based sagas work, I’ll first describe an example. After that, I’ll discuss a couple of design issues that you must address. Then I’ll discuss the benefits and drawbacks of using choreography.

Implementing the Create Order Saga using choreography

Figure 4.4 shows the design of the choreography-based version of the Create Order Saga. The participants communicate by exchanging events. Each participant, starting with the Order Service, updates its database and publishes an event that triggers the next participant.

Figure 4.4. Implementing the Create Order Saga using choreography. The saga participants communicate by exchanging events.

The happy path through this saga is as follows:

  1. Order Service creates an Order in the APPROVAL_PENDING state and publishes an OrderCreated event.
  2. Consumer Service consumes the OrderCreated event, verifies that the consumer can place the order, and publishes a ConsumerVerified event.
  3. Kitchen Service consumes the OrderCreated event, validates the Order, creates a Ticket in a CREATE_PENDING state, and publishes the TicketCreated event.
  4. Accounting Service consumes the OrderCreated event and creates a CreditCardAuthorization in a PENDING state.
  5. Accounting Service consumes the TicketCreated and ConsumerVerified events, charges the consumer’s credit card, and publishes the CreditCardAuthorized event.
  6. Kitchen Service consumes the CreditCardAuthorized event and changes the state of the Ticket to AWAITING_ACCEPTANCE.
  7. Order Service receives the CreditCardAuthorized event, changes the state of the Order to APPROVED, and publishes an OrderApproved event.
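
The event chain above can be simulated with a minimal in-memory publish/subscribe bus. This is a deliberately simplified sketch, not the book's code: a real implementation would use a message broker with transactional messaging, and Accounting Service would wait for both prerequisite events rather than reacting to TicketCreated alone.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class ChoreographySketch {

    // Minimal in-memory event bus standing in for a message broker.
    public static final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();
    public static final List<String> trail = new ArrayList<>();

    public static void subscribe(String eventType, Consumer<String> handler) {
        subscribers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    public static void publish(String eventType, String orderId) {
        trail.add(eventType);
        for (Consumer<String> handler : subscribers.getOrDefault(eventType, List.of())) {
            handler.accept(orderId);
        }
    }

    public static List<String> run(String orderId) {
        // Consumer Service reacts to OrderCreated.
        subscribe("OrderCreated", id -> publish("ConsumerVerified", id));
        // Kitchen Service also reacts to OrderCreated.
        subscribe("OrderCreated", id -> publish("TicketCreated", id));
        // Accounting Service charges the card (simplified: it reacts to
        // TicketCreated instead of waiting for both prerequisite events).
        subscribe("TicketCreated", id -> publish("CreditCardAuthorized", id));
        // Order Service approves the Order once the card is authorized.
        subscribe("CreditCardAuthorized", id -> publish("OrderApproved", id));

        publish("OrderCreated", orderId); // Order Service starts the saga
        return trail;
    }
}
```

Notice that there is no central coordinator anywhere in run(): each participant only knows which events it subscribes to and which it publishes.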

The Create Order Saga must also handle the scenario where a saga participant rejects the Order and publishes some kind of failure event. For example, the authorization of the consumer’s credit card might fail. The saga must execute the compensating transactions to undo what’s already been done. Figure 4.5 shows the flow of events when Accounting Service can’t authorize the consumer’s credit card.

Figure 4.5. The sequence of events in the Create Order Saga when the authorization of the consumer’s credit card fails. Accounting Service publishes a Credit Card Authorization Failed event, which causes Kitchen Service to reject the Ticket and Order Service to reject the Order.

The sequence of events is as follows:

  1. Order Service creates an Order in the APPROVAL_PENDING state and publishes an OrderCreated event.
  2. Consumer Service consumes the OrderCreated event, verifies that the consumer can place the order, and publishes a ConsumerVerified event.
  3. Kitchen Service consumes the OrderCreated event, validates the Order, creates a Ticket in a CREATE_PENDING state, and publishes the TicketCreated event.
  4. Accounting Service consumes the OrderCreated event and creates a CreditCardAuthorization in a PENDING state.
  5. Accounting Service consumes the TicketCreated and ConsumerVerified events, attempts to charge the consumer’s credit card, and publishes a Credit Card Authorization Failed event.
  6. Kitchen Service consumes the Credit Card Authorization Failed event and changes the state of the Ticket to REJECTED.
  7. Order Service consumes the Credit Card Authorization Failed event and changes the state of the Order to REJECTED.

As you can see, the participants of choreography-based sagas interact using publish/subscribe. Let’s take a closer look at some issues you’ll need to consider when implementing publish/subscribe-based communication for your sagas.

Reliable event-based communication

There are a couple of interservice communication-related issues that you must consider when implementing choreography-based sagas. The first issue is ensuring that a saga participant updates its database and publishes an event as part of a database transaction. Each step of a choreography-based saga updates the database and publishes an event. For example, in the Create Order Saga, Kitchen Service receives a Consumer Verified event, creates a Ticket, and publishes a Ticket Created event. It’s essential that the database update and the publishing of the event happen atomically. Consequently, to communicate reliably, the saga participants must use transactional messaging, described in chapter 3.

The second issue you need to consider is ensuring that a saga participant is able to map each event that it receives to its own data. For example, when Order Service receives a Credit Card Authorized event, it must be able to look up the corresponding Order. The solution is for a saga participant to publish events containing a correlation id, which is data that enables other participants to perform the mapping.

For example, the participants of the Create Order Saga can use the orderId as a correlation ID that’s passed from one participant to the next. Accounting Service publishes a Credit Card Authorized event containing the orderId from the TicketCreated event. When Order Service receives a Credit Card Authorized event, it uses the orderId to retrieve the corresponding Order. Similarly, Kitchen Service uses the orderId from that event to retrieve the corresponding Ticket.
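
The correlation-id lookup can be sketched as follows. The Event record and the orderStates map are illustrative stand-ins for a saga participant's message handling and database, not the book's code:

```java
import java.util.HashMap;
import java.util.Map;

public class CorrelationSketch {

    // An event carries the orderId as its correlation id, so each
    // participant can map the event back to its own data.
    public record Event(String type, String orderId) {}

    // Order Service's local store, keyed by orderId.
    public static final Map<String, String> orderStates = new HashMap<>();

    public static void handleCreditCardAuthorized(Event event) {
        // Use the correlation id to look up the corresponding Order.
        orderStates.computeIfPresent(event.orderId(), (id, state) -> "APPROVED");
    }
}
```

Because the orderId travels with every event, each participant can key its own tables by it and never needs to know how the other services store their data.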

Benefits and drawbacks of choreography-based sagas

Choreography-based sagas have several benefits:

  • Simplicity: Services publish events when they create, update, or delete business objects.
  • Loose coupling: The participants subscribe to events and don’t have direct knowledge of each other.

And there are some drawbacks:

  • More difficult to understand: Unlike with orchestration, there isn’t a single place in the code that defines the saga. Instead, choreography distributes the implementation of the saga among the services. Consequently, it’s sometimes difficult for a developer to understand how a given saga works.
  • Cyclic dependencies between the services: The saga participants subscribe to each other’s events, which often creates cyclic dependencies. For example, if you carefully examine figure 4.4, you’ll see that there are cyclic dependencies, such as Order Service → Accounting Service → Order Service. Although this isn’t necessarily a problem, cyclic dependencies are considered a design smell.
  • Risk of tight coupling: Each saga participant needs to subscribe to all events that affect them. For example, Accounting Service must subscribe to all events that cause the consumer’s credit card to be charged or refunded. As a result, there’s a risk that it would need to be updated in lockstep with the order lifecycle implemented by Order Service.

Choreography can work well for simple sagas, but because of these drawbacks it’s often better for more complex sagas to use orchestration. Let’s look at how orchestration works.

4.2.2. Orchestration-based sagas

Orchestration is another way to implement sagas. When using orchestration, you define an orchestrator class whose sole responsibility is to tell the saga participants what to do. The saga orchestrator communicates with the participants using command/async reply-style interaction. To execute a saga step, it sends a command message to a participant telling it what operation to perform. After the saga participant has performed the operation, it sends a reply message to the orchestrator. The orchestrator then processes the message and determines which saga step to perform next.
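
The orchestrator's command/reply loop can be sketched as follows. The command names follow the Create Order Saga, but the participant replies are simulated in-process, and the failure branch shows the switch to compensating commands; this is an illustration, not the Eventuate framework or the book's actual orchestrator:

```java
import java.util.ArrayList;
import java.util.List;

public class OrchestratorSketch {

    // The orchestrator's happy-path command sequence for the Create Order Saga.
    public static final List<String> COMMANDS = List.of(
        "VerifyConsumer", "CreateTicket", "AuthorizeCard", "ApproveTicket", "ApproveOrder");

    // Sends each command and inspects the (simulated) reply. A failure reply
    // makes the orchestrator send compensating commands instead of continuing.
    public static List<String> run(boolean cardAuthorized) {
        List<String> sent = new ArrayList<>();
        for (String command : COMMANDS) {
            sent.add(command);
            boolean successReply = cardAuthorized || !command.equals("AuthorizeCard");
            if (!successReply) {
                sent.add("RejectTicket"); // compensations, in reverse order
                sent.add("RejectOrder");
                break;
            }
        }
        return sent;
    }
}
```

The key contrast with choreography is that the decision logic, which command to send after each reply, lives in this one class rather than being spread across the participants.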

To show how orchestration-based sagas work, I’ll first describe an example. Then I’ll describe how to model orchestration-based sagas as state machines. I’ll discuss how to make use of transactional messaging to ensure reliable communication between the saga orchestrator and the saga participants. I’ll then describe the benefits and drawbacks of using orchestration-based sagas.

Implementing the Create Order Saga using orchestration

Figure 4.6 shows the design of the orchestration-based version of the Create Order Saga. The saga is orchestrated by the CreateOrderSaga class, which invokes the saga participants using asynchronous request/response. This class keeps track of the process and sends command messages to saga participants, such as Kitchen Service and Consumer Service. The CreateOrderSaga class reads reply messages from its reply channel and then determines the next step, if any, in the saga.

Figure 4.6. Implementing the Create Order Saga using orchestration. Order Service implements a saga orchestrator, which invokes the saga participants using asynchronous request/response.

Order Service first creates an Order and a Create Order Saga orchestrator. After that, the flow for the happy path is as follows:

  1. The saga orchestrator sends a Verify Consumer command to Consumer Service.
  2. Consumer Service replies with a Consumer Verified message.
  3. The saga orchestrator sends a Create Ticket command to Kitchen Service.
  4. Kitchen Service replies with a Ticket Created message.
  5. The saga orchestrator sends an Authorize Card message to Accounting Service.
  6. Accounting Service replies with a Card Authorized message.
  7. The saga orchestrator sends an Approve Ticket command to Kitchen Service.
  8. The saga orchestrator sends an Approve Order command to Order Service.
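The command/reply flow above can be sketched as a simple orchestrator loop. This is a minimal Python sketch, not the book's actual Java/Eventuate implementation; the replies are shown synchronously, and all function and message names are illustrative.

```python
# Minimal sketch of saga orchestration: the orchestrator sends a command to
# each participant and uses the reply to decide the next step. Names are
# illustrative; real sagas use asynchronous command/reply messaging.

def create_order_saga(participants):
    """Run the saga; each participant is a callable that returns a reply message."""
    log = []
    steps = [
        ("Consumer Service", "Verify Consumer"),
        ("Kitchen Service", "Create Ticket"),
        ("Accounting Service", "Authorize Card"),
        ("Kitchen Service", "Approve Ticket"),
        ("Order Service", "Approve Order"),
    ]
    for service, command in steps:
        reply = participants[service](command)
        log.append((command, reply))
        if reply.endswith("Failed"):
            return "Order Rejected", log  # compensation would start here
    return "Order Approved", log

# Happy path: every participant succeeds.
happy = {s: (lambda cmd: cmd + " Succeeded") for s in
         ["Consumer Service", "Kitchen Service", "Accounting Service", "Order Service"]}
outcome, log = create_order_saga(happy)
print(outcome)  # Order Approved
```

A participant that replies with a `... Failed` message stops the forward flow, which is where the orchestrator would begin executing compensating transactions.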

Note that in the final step, the saga orchestrator sends a command message to Order Service, even though it’s a component of Order Service. In principle, the Create Order Saga could approve the Order by updating it directly. But in order to be consistent, the saga treats Order Service as just another participant.

Diagrams such as figure 4.6 each depict one scenario for a saga, but a saga is likely to have numerous scenarios. For example, the Create Order Saga has four scenarios. In addition to the happy path, the saga can fail due to a failure in either Consumer Service, Kitchen Service, or Accounting Service. It’s useful, therefore, to model a saga as a state machine, because it describes all possible scenarios.

Modeling saga orchestrators as state machines

A good way to model a saga orchestrator is as a state machine. A state machine consists of a set of states and a set of transitions between states that are triggered by events. Each transition can have an action, which for a saga is the invocation of a saga participant. The transitions between states are triggered by the completion of a local transaction performed by a saga participant. The current state and the specific outcome of the local transaction determine the state transition and what action, if any, to perform. There are also effective testing strategies for state machines. As a result, using a state machine model makes designing, implementing, and testing sagas easier.

Figure 4.7 shows the state machine model for the Create Order Saga. This state machine consists of numerous states, including the following:

  • Verifying Consumer: The initial state. When in this state, the saga is waiting for the Consumer Service to verify that the consumer can place the order.
  • Creating Ticket: The saga is waiting for a reply to the Create Ticket command.
  • Authorizing Card: Waiting for Accounting Service to authorize the consumer’s credit card.
  • Order Approved: A final state indicating that the saga completed successfully.
  • Order Rejected: A final state indicating that the Order was rejected by one of the participants.

Figure 4.7. The state machine model for the Create Order Saga

The state machine also defines numerous state transitions. For example, the state machine transitions from the Creating Ticket state to either the Authorizing Card or the Rejected Order state. It transitions to the Authorizing Card state when it receives a successful reply to the Create Ticket command. Alternatively, if Kitchen Service couldn’t create the Ticket, the state machine transitions to the Rejected Order state.

The state machine’s initial action is to send the VerifyConsumer command to Consumer Service. The response from Consumer Service triggers the next state transition. If the consumer was successfully verified, the saga creates the Ticket and transitions to the Creating Ticket state. But if the consumer verification failed, the saga rejects the Order and transitions to the Rejecting Order state. The state machine undergoes numerous other state transitions, driven by the responses from saga participants, until it reaches a final state of either Order Approved or Order Rejected.
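The states and transitions just described can be written down as a transition table. The following is an illustrative Python sketch, not the book's implementation; intermediate compensation states such as Rejecting Order are omitted, and the event names are assumptions based on the text.

```python
# Sketch of the Create Order Saga as a state machine:
# (current state, event) -> next state. Failure events jump straight to the
# Order Rejected final state; real sagas pass through compensation states first.
TRANSITIONS = {
    ("Verifying Consumer", "Consumer Verified"): "Creating Ticket",
    ("Verifying Consumer", "Consumer Verification Failed"): "Order Rejected",
    ("Creating Ticket", "Ticket Created"): "Authorizing Card",
    ("Creating Ticket", "Ticket Creation Failed"): "Order Rejected",
    ("Authorizing Card", "Card Authorized"): "Order Approved",
    ("Authorizing Card", "Card Authorization Failed"): "Order Rejected",
}

def run(events, state="Verifying Consumer"):
    """Apply a sequence of participant replies and return the final state."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

print(run(["Consumer Verified", "Ticket Created", "Card Authorized"]))  # Order Approved
```

Because every scenario is a path through the same table, the table makes all possible outcomes of the saga explicit and easy to test.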

Saga orchestration and transactional messaging

Each step of an orchestration-based saga consists of a service updating a database and publishing a message. For example, Order Service persists an Order and a Create Order Saga orchestrator and sends a message to the first saga participant. A saga participant, such as Kitchen Service, handles a command message by updating its database and sending a reply message. Order Service processes the participant’s reply message by updating the state of the saga orchestrator and sending a command message to the next saga participant. As described in chapter 3, a service must use transactional messaging in order to atomically update the database and publish messages. Later on in section 4.4, I’ll describe the implementation of the Create Order Saga orchestrator in more detail, including how it uses transactional messaging.
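The key point is that the saga state update and the outgoing command must commit atomically. One way to sketch that is with a transactional outbox (the chapter 3 pattern): both writes happen in one local database transaction, and a separate relay publishes the outbox rows. This is an illustrative Python/SQLite sketch with made-up table names, not the Eventuate Tram implementation.

```python
# Sketch of one saga step using a transactional outbox: the orchestrator's
# state transition and the next command message are committed in a single
# local transaction. Table and column names are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE saga_instance (id TEXT PRIMARY KEY, state TEXT)")
db.execute("CREATE TABLE message_outbox (destination TEXT, payload TEXT)")
db.execute("INSERT INTO saga_instance VALUES ('saga-1', 'Verifying Consumer')")
db.commit()

def handle_reply(saga_id, new_state, destination, command):
    with db:  # one atomic transaction: update saga state AND enqueue the command
        db.execute("UPDATE saga_instance SET state = ? WHERE id = ?",
                   (new_state, saga_id))
        db.execute("INSERT INTO message_outbox VALUES (?, ?)",
                   (destination, command))
    # a separate message relay would read message_outbox and publish to the broker

# The Consumer Verified reply arrives: move on and command Kitchen Service.
handle_reply("saga-1", "Creating Ticket", "Kitchen Service", "Create Ticket")
```

If the process crashes, either both rows are committed or neither is, so the orchestrator never records a transition without its outgoing command (or vice versa).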

Let’s take a look at the benefits and drawbacks of using saga orchestration.

Benefits and drawbacks of orchestration-based sagas

Orchestration-based sagas have several benefits:

  • Simpler dependencies: One benefit of orchestration is that it doesn’t introduce cyclic dependencies. The saga orchestrator invokes the saga participants, but the participants don’t invoke the orchestrator. As a result, the orchestrator depends on the participants but not vice versa, and so there are no cyclic dependencies.
  • Less coupling: Each service implements an API that is invoked by the orchestrator, so it does not need to know about the events published by the saga participants.
  • Improves separation of concerns and simplifies the business logic: The saga coordination logic is localized in the saga orchestrator. The domain objects are simpler and have no knowledge of the sagas that they participate in. For example, when using orchestration, the Order class has no knowledge of any of the sagas, so it has a simpler state machine model. During the execution of the Create Order Saga, it transitions directly from the APPROVAL_PENDING state to the APPROVED state. The Order class doesn’t have any intermediate states corresponding to the steps of the saga. As a result, the business logic is much simpler.

Orchestration also has a drawback: the risk of centralizing too much business logic in the orchestrator. This results in a design where the smart orchestrator tells the dumb services what operations to do. Fortunately, you can avoid this problem by designing orchestrators that are solely responsible for sequencing and don’t contain any other business logic.

I recommend using orchestration for all but the simplest sagas. Implementing the coordination logic for your sagas is just one of the design problems you need to solve. Another, which is perhaps the biggest challenge that you’ll face when using sagas, is handling the lack of isolation. Let’s take a look at that problem and how to solve it.

4.3. Handling the lack of isolation

The I in ACID stands for isolation. The isolation property of ACID transactions ensures that the outcome of executing multiple transactions concurrently is the same as if they were executed in some serial order. The database provides the illusion that each ACID transaction has exclusive access to the data. Isolation makes it a lot easier to write business logic that executes concurrently.

The challenge with using sagas is that they lack the isolation property of ACID transactions. That’s because the updates made by each of a saga’s local transactions are immediately visible to other sagas once that transaction commits. This behavior can cause two problems. First, other sagas can change the data accessed by the saga while it’s executing. And other sagas can read its data before the saga has completed its updates, and consequently can be exposed to inconsistent data. You can, in fact, consider a saga to be ACD:

  • Atomicity: The saga implementation ensures that all transactions are executed or all changes are undone.
  • Consistency: Referential integrity within a service is handled by local databases. Referential integrity across services is handled by the services.
  • Durability: Handled by local databases.

This lack of isolation potentially causes what the database literature calls anomalies. An anomaly is when a transaction reads or writes data in a way that it wouldn’t if transactions were executed one at a time. When an anomaly occurs, the outcome of executing sagas concurrently is different than if they were executed serially.

On the surface, the lack of isolation sounds unworkable. But in practice, it’s common for developers to accept reduced isolation in return for higher performance. An RDBMS lets you specify the isolation level for each transaction (https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html). The default isolation level is usually an isolation level that’s weaker than full isolation, also known as serializable transactions. Real-world database transactions are often different from textbook definitions of ACID transactions.

The next section discusses a set of saga design strategies that deal with the lack of isolation. These strategies are known as countermeasures. Some countermeasures implement isolation at the application level. Other countermeasures reduce the business risk of the lack of isolation. By using countermeasures, you can write saga-based business logic that works correctly.

I’ll begin the section by describing the anomalies that are caused by the lack of isolation. After that, I’ll talk about countermeasures that either eliminate those anomalies or reduce their business risk.

4.3.1. Overview of anomalies

The lack of isolation can cause the following three anomalies:

  • Lost updates: One saga overwrites, without reading, changes made by another saga.
  • Dirty reads: A transaction or a saga reads the updates made by a saga that has not yet completed those updates.
  • Fuzzy/nonrepeatable reads: Two different steps of a saga read the same data and get different results because another saga has made updates.

All three anomalies can occur, but the first two are the most common and the most challenging. Let’s take a look at those two types of anomaly, starting with lost updates.

Lost updates

A lost update anomaly occurs when one saga overwrites an update made by another saga. Consider, for example, the following scenario:

  1. The first step of the Create Order Saga creates an Order.
  2. While that saga is executing, the Cancel Order Saga cancels the Order.
  3. The final step of the Create Order Saga approves the Order.

In this scenario, the Create Order Saga ignores the update made by the Cancel Order Saga and overwrites it. As a result, the FTGO application will ship an order that the customer had cancelled. Later in this section, I’ll show how to prevent lost updates.

Dirty reads

A dirty read occurs when one saga reads data that’s in the middle of being updated by another saga. Consider, for example, a version of the FTGO application where consumers have a credit limit. In this application, a saga that cancels an order consists of the following transactions:

  • Consumer Service: Increase the available credit.
  • Order Service: Change the state of the Order to cancelled.
  • Delivery Service: Cancel the delivery.

Let’s imagine a scenario that interleaves the execution of the Cancel Order and Create Order Sagas, and the Cancel Order Saga is rolled back because it’s too late to cancel the delivery. It’s possible that the sequence of transactions that invoke the Consumer Service is as follows:

  1. Cancel Order Saga: Increase the available credit.
  2. Create Order Saga: Reduce the available credit.
  3. Cancel Order Saga: A compensating transaction that reduces the available credit.

In this scenario, the Create Order Saga does a dirty read of the available credit that enables the consumer to place an order that exceeds their credit limit. It’s likely that this is an unacceptable risk to the business.

Let’s look at how to prevent this and other kinds of anomalies from impacting an application.

4.3.2. Countermeasures for handling the lack of isolation

The saga transaction model is ACD, and its lack of isolation can result in anomalies that cause applications to misbehave. It’s the responsibility of the developer to write sagas in a way that either prevents the anomalies or minimizes their impact on the business. This may sound like a daunting task, but you’ve already seen an example of a strategy that prevents anomalies. An Order’s use of *_PENDING states, such as APPROVAL_PENDING, is an example of one such strategy. Sagas that update Orders, such as the Create Order Saga, begin by setting the state of an Order to *_PENDING. The *_PENDING state tells other transactions that the Order is being updated by a saga and to act accordingly.

An Order’s use of *_PENDING states is an example of what the 1998 paper “Semantic ACID properties in multidatabases using remote procedure calls and update propagations” by Lars Frank and Torben U. Zahle calls a semantic lock countermeasure (https://dl.acm.org/citation.cfm?id=284472.284478). The paper describes how to deal with the lack of transaction isolation in multi-database architectures that don’t use distributed transactions. Many of its ideas are useful when designing sagas. It describes a set of countermeasures for handling anomalies caused by lack of isolation that either prevent one or more anomalies or minimize their impact on the business. The countermeasures described by this paper are as follows:

  • Semantic lock: An application-level lock.
  • Commutative updates: Design update operations to be executable in any order.
  • Pessimistic view: Reorder the steps of a saga to minimize business risk.
  • Reread value: Prevent dirty writes by rereading data to verify that it’s unchanged before overwriting it.
  • Version file: Record the updates to a record so that they can be reordered.
  • By value: Use each request’s business risk to dynamically select the concurrency mechanism.

Later in this section, I describe each of these countermeasures, but first I want to introduce some terminology for describing the structure of a saga that’s useful when discussing countermeasures.

The structure of a saga

The countermeasures paper mentioned in the last section defines a useful model for the structure of a saga. In this model, shown in figure 4.8, a saga consists of three types of transactions:

  • Compensatable transactions: Transactions that can potentially be rolled back using a compensating transaction.
  • Pivot transaction: The go/no-go point in a saga. If the pivot transaction commits, the saga will run until completion. A pivot transaction can be a transaction that’s neither compensatable nor retriable. Alternatively, it can be the last compensatable transaction or the first retriable transaction.
  • Retriable transactions: Transactions that follow the pivot transaction and are guaranteed to succeed.
Figure 4.8. A saga consists of three different types of transactions: compensatable transactions, which can be rolled back and so have a compensating transaction; the pivot transaction, which is the saga’s go/no-go point; and retriable transactions, which don’t need to be rolled back and are guaranteed to complete.

In the Create Order Saga, the createOrder(), verifyConsumerDetails(), and createTicket() steps are compensatable transactions. The createOrder() and createTicket() transactions have compensating transactions that undo their updates. The verifyConsumerDetails() transaction is read-only, so doesn’t need a compensating transaction. The authorizeCreditCard() transaction is this saga’s pivot transaction. If the consumer’s credit card can be authorized, this saga is guaranteed to complete. The approveTicket() and approveOrder() steps are retriable transactions that follow the pivot transaction.
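This step classification determines what happens on rollback: the compensating transactions of the already-completed compensatable steps run in reverse order. The following Python sketch is illustrative only; the step names come from the text, but the structure and helper function are assumptions, not the book's API.

```python
# Sketch of the Create Order Saga's structure as (step name, step type).
# Compensatable steps have a compensating action; retriable steps never need one.
STEPS = [
    ("createOrder", "compensatable"),            # compensated by rejecting the Order
    ("verifyConsumerDetails", "compensatable"),  # read-only, compensation is a no-op
    ("createTicket", "compensatable"),           # compensated by cancelling the Ticket
    ("authorizeCreditCard", "pivot"),            # the go/no-go point
    ("approveTicket", "retriable"),
    ("approveOrder", "retriable"),
]

def steps_to_compensate(failed_step):
    """If failed_step fails, the compensations of the preceding compensatable
    steps are executed in reverse order."""
    done = []
    for name, kind in STEPS:
        if name == failed_step:
            return [n for n, k in reversed(done) if k == "compensatable"]
        done.append((name, kind))
    return []

print(steps_to_compensate("authorizeCreditCard"))
# ['createTicket', 'verifyConsumerDetails', 'createOrder']
```

Once the pivot transaction commits, no compensations are ever needed: the remaining retriable steps are simply retried until they succeed.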

The distinction between compensatable transactions and retriable transactions is especially important. As you’ll see, each type of transaction plays a different role in the countermeasures. Chapter 13 states that when migrating to microservices, the monolith must sometimes participate in sagas and that it’s significantly simpler if the monolith only ever needs to execute retriable transactions.

Let’s now look at each countermeasure, starting with the semantic lock countermeasure.

Countermeasure: Semantic lock

When using the semantic lock countermeasure, a saga’s compensatable transaction sets a flag in any record that it creates or updates. The flag indicates that the record isn’t committed and could potentially change. The flag can either be a lock that prevents other transactions from accessing the record or a warning that indicates that other transactions should treat that record with suspicion. It’s cleared by either a retriable transaction—saga is completing successfully—or by a compensating transaction: the saga is rolling back.

The Order.state field is a great example of a semantic lock. The *_PENDING states, such as APPROVAL_PENDING and REVISION_PENDING, implement a semantic lock. They tell other sagas that access an Order that a saga is in the process of updating the Order. For instance, the first step of the Create Order Saga, which is a compensatable transaction, creates an Order in an APPROVAL_PENDING state. The final step of the Create Order Saga, which is a retriable transaction, changes the field to APPROVED. A compensating transaction changes the field to REJECTED.
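The semantic lock can be sketched as a state check on the Order. The following Python class is illustrative (the state names follow the text, but the methods and error handling are assumptions); it shows the "fail and let the client retry" option for a request that hits a locked record.

```python
# Sketch of the semantic lock countermeasure: a *_PENDING state marks an Order
# that a saga is still updating. Other requests must treat such records specially.
class Order:
    def __init__(self):
        # The saga's first, compensatable step creates the Order in a PENDING state.
        self.state = "APPROVAL_PENDING"

    def approve(self):
        """Retriable step of the saga: clears the semantic lock on success."""
        assert self.state == "APPROVAL_PENDING"
        self.state = "APPROVED"

    def reject(self):
        """Compensating transaction: clears the semantic lock on rollback."""
        assert self.state == "APPROVAL_PENDING"
        self.state = "REJECTED"

    def cancel(self):
        if self.state.endswith("_PENDING"):
            # Another saga holds the semantic lock: fail and tell the client to retry.
            raise RuntimeError("Order is PENDING, retry later")
        self.state = "CANCELLED"

order = Order()
try:
    order.cancel()               # rejected while the Create Order Saga is running
except RuntimeError as e:
    print(e)                     # Order is PENDING, retry later
order.approve()                  # the saga completes, releasing the lock
order.cancel()                   # now the cancellation is allowed
print(order.state)               # CANCELLED
```

The alternative option discussed next, blocking until the lock is released, trades this client-side retry logic for lock management (and deadlock detection) inside the application.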

Managing the lock is only half the problem. You also need to decide on a case-by-case basis how a saga should deal with a record that has been locked. Consider, for example, the cancelOrder() system command. A client might invoke this operation to cancel an Order that’s in the APPROVAL_PENDING state.

There are a few different ways to handle this scenario. One option is for the cancelOrder() system command to fail and tell the client to try again later. The main benefit of this approach is that it’s simple to implement. The drawback, however, is that it makes the client more complex because it has to implement retry logic.

Another option is for cancelOrder() to block until the lock is released. A benefit of using semantic locks is that they essentially recreate the isolation provided by ACID transactions. Sagas that update the same record are serialized, which significantly reduces the programming effort. Another benefit is that they remove the burden of retries from the client. The drawback is that the application must manage locks. It must also implement a deadlock detection algorithm that performs a rollback of a saga to break a deadlock and re-execute it.

Countermeasure: Commutative updates

One straightforward countermeasure is to design the update operations to be commutative. Operations are commutative if they can be executed in any order. An Account’s debit() and credit() operations are commutative (if you ignore overdraft checks). This countermeasure is useful because it eliminates lost updates.

Consider, for example, a scenario where a saga needs to be rolled back after a compensatable transaction has debited (or credited) an account. The compensating transaction can simply credit (or debit) the account to undo the update. There’s no possibility of overwriting updates made by other sagas.
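Commutativity is easy to demonstrate concretely. In this minimal Python sketch (an illustrative Account class, ignoring overdraft checks as the text does), two interleavings of a debit and a credit produce the same balance, which is why a compensating credit can never clobber another saga's update:

```python
# Sketch of the commutative updates countermeasure: debit() and credit() can be
# applied in any order and still yield the same result.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def debit(self, amount):
        self.balance -= amount

    def credit(self, amount):
        self.balance += amount

a = Account(100)
a.debit(30)
a.credit(10)    # one interleaving of two concurrent sagas

b = Account(100)
b.credit(10)
b.debit(30)     # the opposite interleaving

print(a.balance == b.balance)  # True: 70 either way
```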

Countermeasure: Pessimistic view

Another way to deal with the lack of isolation is the pessimistic view countermeasure. It reorders the steps of a saga to minimize business risk due to a dirty read. Consider, for example, the scenario earlier used to describe the dirty read anomaly. In that scenario, the Create Order Saga performed a dirty read of the available credit and created an order that exceeded the consumer credit limit. To reduce the risk of that happening, this countermeasure would reorder the Cancel Order Saga:

  1. Order Service: Change the state of the Order to cancelled.
  2. Delivery Service: Cancel the delivery.
  3. Customer Service: Increase the available credit.

In this reordered version of the saga, the available credit is increased in a retriable transaction, which eliminates the possibility of a dirty read.

Countermeasure: Reread value

The reread value countermeasure prevents lost updates. A saga that uses this countermeasure rereads a record before updating it, verifies that it’s unchanged, and then updates the record. If the record has changed, the saga aborts and possibly restarts. This countermeasure is a form of the Optimistic Offline Lock pattern (https://martinfowler.com/eaaCatalog/optimisticOfflineLock.html).
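One common way to implement this is a version number on the record: the update only succeeds if the version is unchanged since the record was read. This Python sketch is illustrative (the in-memory store and version field are assumptions, not the book's code):

```python
# Sketch of the reread value countermeasure (a form of Optimistic Offline Lock):
# before updating, the saga rereads the record and checks that its version is
# unchanged; otherwise the update is abandoned and the saga aborts.
orders = {"order-1": {"state": "APPROVAL_PENDING", "version": 1}}

def approve(order_id, expected_version):
    order = orders[order_id]                 # reread the record
    if order["version"] != expected_version:
        return False                         # changed since we read it: abort the saga
    order["state"] = "APPROVED"
    order["version"] += 1
    return True

v = orders["order-1"]["version"]             # the saga reads the Order (version 1)
orders["order-1"].update(state="CANCELLED", version=2)  # another saga cancels it
print(approve("order-1", v))                 # False: the approval step aborts
```

With a relational database, the same check is typically a conditional `UPDATE ... WHERE version = ?` whose affected-row count reveals whether the record changed.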

The Create Order Saga could use this countermeasure to handle the scenario where the Order is cancelled while it’s in the process of being approved. The transaction that approves the Order verifies that the Order is unchanged since it was created earlier in the saga. If it’s unchanged, the transaction approves the Order. But if the Order has been cancelled, the transaction aborts the saga, which causes its compensating transactions to be executed.

Countermeasure: Version file

The version file countermeasure is so named because it records the operations that are performed on a record so that it can reorder them. It’s a way to turn noncommutative operations into commutative operations. To see how this countermeasure works, consider a scenario where the Create Order Saga executes concurrently with a Cancel Order Saga. Unless the sagas use the semantic lock countermeasure, it’s possible that the Cancel Order Saga cancels the authorization of the consumer’s credit card before the Create Order Saga authorizes the card.

One way for the Accounting Service to handle these out-of-order requests is for it to record the operations as they arrive and then execute them in the correct order. In this scenario, it would first record the Cancel Authorization request. Then, when the Accounting Service receives the subsequent Authorize Card request, it would notice that it had already received the Cancel Authorization request and skip authorizing the credit card.
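The recorded-operations idea can be sketched as follows. This is an illustrative Python sketch of the scenario in the text; the class and message names are assumptions about how Accounting Service might keep its "version file" of arrived requests.

```python
# Sketch of the version file countermeasure: Accounting Service records every
# operation that arrives for a card authorization, so an Authorize Card request
# that arrives after a Cancel Authorization can be detected and skipped.
class CardAuthorization:
    def __init__(self):
        self.operations = []      # the "version file": every request, in arrival order
        self.authorized = False

    def handle(self, op):
        self.operations.append(op)
        if op == "Authorize Card":
            # Skip the authorization if a cancellation was already recorded.
            if "Cancel Authorization" not in self.operations[:-1]:
                self.authorized = True
        elif op == "Cancel Authorization":
            self.authorized = False

auth = CardAuthorization()
auth.handle("Cancel Authorization")  # arrives first, out of order
auth.handle("Authorize Card")        # skipped: a cancellation was already recorded
print(auth.authorized)               # False
```

Recording the operations effectively turns the non-commutative authorize/cancel pair into operations that can safely arrive in either order.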

Countermeasure: By value

The final countermeasure is the by value countermeasure. It’s a strategy for selecting concurrency mechanisms based on business risk. An application that uses this countermeasure uses the properties of each request to decide between using sagas and distributed transactions. It executes low-risk requests using sagas, perhaps applying the countermeasures described in the preceding section. But it executes high-risk requests involving, for example, large amounts of money, using distributed transactions. This strategy enables an application to dynamically make trade-offs about business risk, availability, and scalability.

It’s likely that you’ll need to use one or more of these countermeasures when implementing sagas in your application. Let’s look at the detailed design and implementation of the Create Order Saga, which uses the semantic lock countermeasure.

4.4. The design of the Order Service and the Create Order Saga

Now that we’ve looked at various saga design and implementation issues, let’s see an example. Figure 4.9 shows the design of Order Service. The service’s business logic consists of traditional business logic classes, such as Order Service and the Order entity. There are also saga orchestrator classes, including the CreateOrderSaga class, which orchestrates Create Order Saga. Also, because Order Service participates in its own sagas, it has an OrderCommandHandlers adapter class that handles command messages by invoking OrderService.

Figure 4.9. The design of Order Service and its sagas

Some parts of Order Service should look familiar. As in a traditional application, the core of the business logic is implemented by the OrderService, Order, and OrderRepository classes. In this chapter, I’ll briefly describe these classes. I describe them in more detail in chapter 5.

What’s less familiar about Order Service are the saga-related classes. This service is both a saga orchestrator and a saga participant. Order Service has several saga orchestrators, such as CreateOrderSaga. The saga orchestrators send command messages to a saga participant using a saga participant proxy class, such as KitchenServiceProxy and OrderServiceProxy. A saga participant proxy defines a saga participant’s messaging API. Order Service also has an OrderCommandHandlers class, which handles the command messages sent by sagas to Order Service.

Let’s look in more detail at the design, starting with the OrderService class.

4.4.1. The OrderService class

The OrderService class is a domain service called by the service’s API layer. It’s responsible for creating and managing orders. Figure 4.10 shows OrderService and some of its collaborators. OrderService creates and updates Orders, invokes the OrderRepository to persist Orders, and creates sagas, such as the CreateOrderSaga, using the SagaManager. The SagaManager class is one of the classes provided by the Eventuate Tram Saga framework, which is a framework for writing saga orchestrators and participants, and is discussed a little later in this section.

Figure 4.10. OrderService creates and updates Orders, invokes the OrderRepository to persist Orders, and creates sagas, including the CreateOrderSaga.

I’ll discuss this class in more detail in chapter 5. For now, let’s focus on the createOrder() method. The following listing shows OrderService’s createOrder() method. This method first creates an Order and then creates a CreateOrderSaga to validate the order.

Listing 4.1. The OrderService class and its createOrder() method
@Transactional                                                           1
public class OrderService {

  @Autowired
  private SagaManager<CreateOrderSagaState> createOrderSagaManager;

  @Autowired
  private OrderRepository orderRepository;

  @Autowired
  private DomainEventPublisher eventPublisher;

  public Order createOrder(OrderDetails orderDetails) {
    ...
    ResultWithEvents<Order> orderAndEvents = Order.createOrder(...);     2
    Order order = orderAndEvents.result;
    orderRepository.save(order);                                         3

    eventPublisher.publish(Order.class,                                  4
                           Long.toString(order.getId()),
                           orderAndEvents.events);

    CreateOrderSagaState data =
        new CreateOrderSagaState(order.getId(), orderDetails);           5
    createOrderSagaManager.create(data, Order.class, order.getId());

    return order;
  }

  ...
}

  • 1 Ensure that service methods are transactional.
  • 2 Create the Order.
  • 3 Persist the Order in the database.
  • 4 Publish domain events.
  • 5 Create a CreateOrderSaga.

The createOrder() method creates an Order by calling the factory method Order.createOrder(). It then persists the Order using the OrderRepository, which is a JPA-based repository. It creates the CreateOrderSaga by calling SagaManager.create(), passing a CreateOrderSagaState containing the ID of the newly saved Order and the OrderDetails. The SagaManager instantiates the saga orchestrator, which causes it to send a command message to the first saga participant, and persists the saga orchestrator in the database.

Let’s look at the CreateOrderSaga and its associated classes.

4.4.2. The implementation of the Create Order Saga

Figure 4.11 shows the classes that implement the Create Order Saga. The responsibilities of each class are as follows:

Figure 4.11. Order Service’s sagas, such as the Create Order Saga, are implemented using the Eventuate Tram Saga framework.

  • CreateOrderSaga—A singleton class that defines the saga’s state machine. It invokes the CreateOrderSagaState to create command messages and sends them to participants using message channels specified by the saga participant proxy classes, such as KitchenServiceProxy.
  • CreateOrderSagaState—A saga’s persistent state, which creates command messages.
  • Saga participant proxy classes, such as KitchenServiceProxy—Each proxy class defines a saga participant’s messaging API, which consists of the command channel, the command message types, and the reply types.

These classes are written using the Eventuate Tram Saga framework.

The Eventuate Tram Saga framework provides a domain-specific language (DSL) for defining a saga’s state machine. It executes the saga’s state machine and exchanges messages with saga participants using the Eventuate Tram framework. The framework also persists the saga’s state in the database.

Let’s take a closer look at the implementation of Create Order Saga, starting with the CreateOrderSaga class.

The CreateOrderSaga orchestrator

The CreateOrderSaga class implements the state machine shown earlier in figure 4.7. This class implements SimpleSaga, a base interface for sagas. The heart of the CreateOrderSaga class is the saga definition shown in the following listing. It uses the DSL provided by the Eventuate Tram Saga framework to define the steps of the Create Order Saga.

Listing 4.2. The definition of the CreateOrderSaga
public class CreateOrderSaga implements SimpleSaga<CreateOrderSagaState> {

  private SagaDefinition<CreateOrderSagaState> sagaDefinition;

  public CreateOrderSaga(OrderServiceProxy orderService,
                         ConsumerServiceProxy consumerService,
                         KitchenServiceProxy kitchenService,
                         AccountingServiceProxy accountingService) {
    this.sagaDefinition =
            step()
              .withCompensation(orderService.reject,
                                CreateOrderSagaState::makeRejectOrderCommand)
            .step()
              .invokeParticipant(consumerService.validateOrder,
                      CreateOrderSagaState::makeValidateOrderByConsumerCommand)
            .step()
              .invokeParticipant(kitchenService.create,
                      CreateOrderSagaState::makeCreateTicketCommand)
              .onReply(CreateTicketReply.class,
                      CreateOrderSagaState::handleCreateTicketReply)
              .withCompensation(kitchenService.cancel,
                  CreateOrderSagaState::makeCancelCreateTicketCommand)
            .step()
              .invokeParticipant(accountingService.authorize,
                      CreateOrderSagaState::makeAuthorizeCommand)
            .step()
              .invokeParticipant(kitchenService.confirmCreate,
                  CreateOrderSagaState::makeConfirmCreateTicketCommand)
            .step()
              .invokeParticipant(orderService.approve,
                                 CreateOrderSagaState::makeApproveOrderCommand)
            .build();
  }

  @Override
  public SagaDefinition<CreateOrderSagaState> getSagaDefinition() {
    return sagaDefinition;
  }
}

The CreateOrderSaga’s constructor creates the saga definition and stores it in the sagaDefinition field. The getSagaDefinition() method returns the saga definition.

To see how CreateOrderSaga works, let’s look at the definition of the third step of the saga, shown in the following listing. This step of the saga invokes the Kitchen Service to create a Ticket. Its compensating transaction cancels that Ticket. The step(), invokeParticipant(), onReply(), and withCompensation() methods are part of the DSL provided by Eventuate Tram Saga.

Listing 4.3. The definition of the third step of the saga
public class CreateOrderSaga ...

  public CreateOrderSaga(..., KitchenServiceProxy kitchenService,
                         ...) {
    ...
    .step()
      .invokeParticipant(kitchenService.create,                         1
                 CreateOrderSagaState::makeCreateTicketCommand)
      .onReply(CreateTicketReply.class,
                CreateOrderSagaState::handleCreateTicketReply)          2
      .withCompensation(kitchenService.cancel,                          3
               CreateOrderSagaState::makeCancelCreateTicketCommand)
    ...
  }

  • 1 Define the forward transaction.
  • 2 Call handleCreateTicketReply() when a successful reply is received.
  • 3 Define the compensating transaction.

The call to invokeParticipant() defines the forward transaction. It creates the CreateTicket command message by calling CreateOrderSagaState.makeCreateTicketCommand() and sends it to the channel specified by kitchenService.create. The call to onReply() specifies that CreateOrderSagaState.handleCreateTicketReply() should be called when a successful reply is received from Kitchen Service. This method stores the returned ticketId in the CreateOrderSagaState. The call to withCompensation() defines the compensating transaction. It creates a CancelCreateTicket command message by calling CreateOrderSagaState.makeCancelCreateTicketCommand() and sends it to the channel specified by kitchenService.cancel.

The other steps of the saga are defined in a similar fashion. The CreateOrderSagaState creates each message, which is sent by the saga to the messaging endpoint defined by a KitchenServiceProxy. Let’s take a look at each of those classes, starting with CreateOrderSagaState.

The CreateOrderSagaState class

The CreateOrderSagaState class, shown in the following listing, represents the state of a saga instance. An instance of this class is created by OrderService and is persisted in the database by the Eventuate Tram Saga framework. Its primary responsibility is to create the messages that are sent to saga participants.

Listing 4.4. CreateOrderSagaState stores the state of a saga instance
public class CreateOrderSagaState {

  private Long orderId;

  private OrderDetails orderDetails;
  private long ticketId;

  public Long getOrderId() {
    return orderId;
  }

  private CreateOrderSagaState() {
  }

  public CreateOrderSagaState(Long orderId, OrderDetails orderDetails) {  1
    this.orderId = orderId;
    this.orderDetails = orderDetails;
  }

  CreateTicket makeCreateTicketCommand() {                                2
    return new CreateTicket(getOrderDetails().getRestaurantId(),
                  getOrderId(), makeTicketDetails(getOrderDetails()));
  }

  void handleCreateTicketReply(CreateTicketReply reply) {                 3
    logger.debug("getTicketId {}", reply.getTicketId());
    setTicketId(reply.getTicketId());
  }

  CancelCreateTicket makeCancelCreateTicketCommand() {                    4
    return new CancelCreateTicket(getOrderId());
  }

  ...

  • 1 Invoked by the OrderService to instantiate a CreateOrderSagaState
  • 2 Creates a CreateTicket command message
  • 3 Saves the ID of the newly created Ticket
  • 4 Creates a CancelCreateTicket command message

The CreateOrderSaga invokes the CreateOrderSagaState to create the command messages. It sends those command messages to the endpoints defined by the SagaParticipantProxy classes. Let’s take a look at one of those classes: KitchenServiceProxy.

The KitchenServiceProxy class

The KitchenServiceProxy class, shown in listing 4.5, defines the command message endpoints for Kitchen Service. There are three endpoints:

  • create—Creates a Ticket
  • confirmCreate—Confirms the creation
  • cancel—Cancels a Ticket

Each CommandEndpoint specifies the command type, the command message’s destination channel, and the expected reply types.

Listing 4.5. KitchenServiceProxy defines the command message endpoints for Kitchen Service
public class KitchenServiceProxy {

  public final CommandEndpoint<CreateTicket> create =
        CommandEndpointBuilder
          .forCommand(CreateTicket.class)
          .withChannel(
               KitchenServiceChannels.kitchenServiceChannel)
          .withReply(CreateTicketReply.class)
          .build();

  public final CommandEndpoint<ConfirmCreateTicket> confirmCreate =
         CommandEndpointBuilder
          .forCommand(ConfirmCreateTicket.class)
          .withChannel(
                KitchenServiceChannels.kitchenServiceChannel)
          .withReply(Success.class)
          .build();

  public final CommandEndpoint<CancelCreateTicket> cancel =
        CommandEndpointBuilder
          .forCommand(CancelCreateTicket.class)
          .withChannel(
                 KitchenServiceChannels.kitchenServiceChannel)
          .withReply(Success.class)
          .build();

}

Proxy classes, such as KitchenServiceProxy, aren’t strictly necessary. A saga could simply send command messages directly to participants. But proxy classes have two important benefits. First, a proxy class defines static typed endpoints, which reduces the chance of a saga sending the wrong message to a service. Second, a proxy class is a well-defined API for invoking a service that makes the code easier to understand and test. For example, chapter 10 describes how to write tests for KitchenServiceProxy that verify that Order Service correctly invokes Kitchen Service. Without KitchenServiceProxy, it would be impossible to write such a narrowly scoped test.
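To make the first benefit concrete, here is a deliberately simplified, framework-free sketch of a statically typed endpoint. TypedEndpoint and CreateTicketExample are invented for this illustration; TypedEndpoint is not the Eventuate Tram Saga CommandEndpoint class. It only shows how a generic type parameter ties a command class to a channel, so the compiler rejects a command of the wrong type.

```java
// Invented illustration of a statically typed messaging endpoint. The generic
// parameter C ties the endpoint to one command class, so passing the wrong
// command type is a compile-time error.
class TypedEndpoint<C> {

    private final Class<C> commandType;
    private final String channel;

    TypedEndpoint(Class<C> commandType, String channel) {
        this.commandType = commandType;
        this.channel = channel;
    }

    String channel() {
        return channel;
    }

    // Only accepts commands of type C.
    String describeSend(C command) {
        return "send " + commandType.getSimpleName() + " to " + channel;
    }
}

// A hypothetical command class used only for this example.
class CreateTicketExample {
}
```

Given a `TypedEndpoint<CreateTicketExample>`, a call such as `describeSend()` with some other command class would not compile, which is the kind of protection a proxy class like KitchenServiceProxy provides.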

The Eventuate Tram Saga framework

The Eventuate Tram Saga framework, shown in figure 4.12, is a framework for writing both saga orchestrators and saga participants. It uses the transactional messaging capabilities of Eventuate Tram, discussed in chapter 3.

Figure 4.12. Eventuate Tram Saga is a framework for writing both saga orchestrators and saga participants.

The saga orchestration package is the most complex part of the framework. It provides SimpleSaga, a base interface for sagas, and a SagaManager class, which creates and manages saga instances. The SagaManager handles persisting a saga, sending the command messages that it generates, subscribing to reply messages, and invoking the saga to handle replies. Figure 4.13 shows the sequence of events when OrderService creates a saga. The sequence of events is as follows:

  1. OrderService creates the CreateOrderSagaState.
  2. It creates an instance of a saga by invoking the SagaManager.
  3. The SagaManager executes the first step of the saga definition.
  4. The CreateOrderSagaState is invoked to generate a command message.
  5. The SagaManager sends the command message to the saga participant (the Consumer Service).
  6. The SagaManager saves the saga instance in the database.
Figure 4.13. OrderService creates the Create Order Saga.

Figure 4.14 shows the sequence of events when SagaManager receives a reply from Consumer Service.

Figure 4.14. The sequence of events when the SagaManager receives a reply message from a saga participant

The sequence of events is as follows:

  1. Eventuate Tram invokes SagaManager with the reply from Consumer Service.
  2. SagaManager retrieves the saga instance from the database.
  3. SagaManager executes the next step of the saga definition.
  4. CreateOrderSagaState is invoked to generate a command message.
  5. SagaManager sends the command message to the specified saga participant (Kitchen Service).
  6. SagaManager saves the updated saga instance in the database.

If a saga participant fails, SagaManager executes the compensating transactions in reverse order.
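As a minimal, framework-free sketch of that unwinding (CompensationSketch and its method names are invented for illustration and are not Eventuate Tram Saga APIs), each completed forward step can push its compensation onto a stack, which a failure then pops:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Invented illustration of reverse-order compensation; not Eventuate code.
class CompensationSketch {

    // Stack of compensations; the most recently completed step is on top.
    private final Deque<Runnable> compensations = new ArrayDeque<>();
    final List<String> log = new ArrayList<>();

    // Record a successfully completed forward step and its compensation.
    void completeStep(String name) {
        log.add("forward:" + name);
        compensations.push(() -> log.add("compensate:" + name));
    }

    // On failure, undo the completed steps in reverse order.
    void rollback() {
        while (!compensations.isEmpty()) {
            compensations.pop().run();
        }
    }
}
```

Completing steps createTicket and then authorize and calling rollback() logs compensate:authorize before compensate:createTicket, mirroring how the orchestrator undoes the most recent step first.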

The other part of the Eventuate Tram Saga framework is the saga participant package. It provides the SagaCommandHandlersBuilder and SagaCommandDispatcher classes for writing saga participants. These classes route command messages to handler methods, which invoke the saga participants’ business logic and generate reply messages. Let’s take a look at how these classes are used by Order Service.

4.4.3. The OrderCommandHandlers class

Order Service participates in its own sagas. For example, CreateOrderSaga invokes Order Service to either approve or reject an Order. The OrderCommandHandlers class, shown in figure 4.15, defines the handler methods for the command messages sent by these sagas.

Figure 4.15. OrderCommandHandlers implements command handlers for the commands sent by Order Service’s various sagas.

Each handler method invokes OrderService to update an Order and makes a reply message. The SagaCommandDispatcher class routes the command messages to the appropriate handler method and sends the reply.

The following listing shows the OrderCommandHandlers class. Its commandHandlers() method maps command message types to handler methods. Each handler method takes a command message as a parameter, invokes OrderService, and returns a reply message.

Listing 4.6. The command handlers for Order Service
public class OrderCommandHandlers {

  @Autowired
  private OrderService orderService;

  public CommandHandlers commandHandlers() {                           1
    return SagaCommandHandlersBuilder
          .fromChannel("orderService")
          .onMessage(ApproveOrderCommand.class, this::approveOrder)
          .onMessage(RejectOrderCommand.class, this::rejectOrder)
          ...
          .build();
  }

  public Message approveOrder(CommandMessage<ApproveOrderCommand> cm) {
    long orderId = cm.getCommand().getOrderId();
    orderService.approveOrder(orderId);                                2
    return withSuccess();                                              3
  }

  public Message rejectOrder(CommandMessage<RejectOrderCommand> cm) {
    long orderId = cm.getCommand().getOrderId();
    orderService.rejectOrder(orderId);                                 4
    return withSuccess();
  }
}

  • 1 Route each command message to the appropriate handler method.
  • 2 Change the state of the Order to authorized.
  • 3 Return a generic success message.
  • 4 Change the state of the Order to rejected.

The approveOrder() and rejectOrder() methods update the specified Order by invoking OrderService. The other services that participate in sagas have similar command handler classes that update their domain objects.

4.4.4. The OrderServiceConfiguration class

The Order Service uses the Spring framework. The following listing is an excerpt of the OrderServiceConfiguration class, which is an @Configuration class that instantiates and wires together the Spring @Beans.

Listing 4.7. OrderServiceConfiguration is a Spring @Configuration class that defines the @Beans for Order Service.
@Configuration
public class OrderServiceConfiguration {

 @Bean
 public OrderService orderService(RestaurantRepository restaurantRepository,
                                  ...
                                  SagaManager<CreateOrderSagaState>
                                          createOrderSagaManager,
                                  ...) {
  return new OrderService(restaurantRepository,
                          ...
                          createOrderSagaManager
                          ...);
 }

 @Bean
 public SagaManager<CreateOrderSagaState> createOrderSagaManager(
     CreateOrderSaga saga) {
  return new SagaManagerImpl<>(saga);
 }

 @Bean
 public CreateOrderSaga createOrderSaga(OrderServiceProxy orderService,
                                        ConsumerServiceProxy consumerService,
                                        ...) {
  return new CreateOrderSaga(orderService, consumerService, ...);
 }

 @Bean
 public OrderCommandHandlers orderCommandHandlers() {
  return new OrderCommandHandlers();
 }

 @Bean
 public SagaCommandDispatcher orderCommandHandlersDispatcher(
     OrderCommandHandlers orderCommandHandlers) {
  return new SagaCommandDispatcher("orderService",
      orderCommandHandlers.commandHandlers());
 }

 @Bean
 public KitchenServiceProxy kitchenServiceProxy() {
   return new KitchenServiceProxy();
 }

 @Bean
 public OrderServiceProxy orderServiceProxy() {
   return new OrderServiceProxy();
 }

 ...

}

This class defines several Spring @Beans including orderService, createOrderSagaManager, createOrderSaga, orderCommandHandlers, and orderCommandHandlersDispatcher. It also defines Spring @Beans for the various proxy classes, including kitchenServiceProxy and orderServiceProxy.


CreateOrderSaga is only one of Order Service’s many sagas. Many of its other system operations also use sagas. For example, the cancelOrder() operation uses a Cancel Order Saga, and the reviseOrder() operation uses a Revise Order Saga. As a result, even though many services have an external API that uses a synchronous protocol, such as REST or gRPC, a large amount of interservice communication will use asynchronous messaging.


As you can see, transaction management and some aspects of business logic design are quite different in a microservice architecture. Fortunately, saga orchestrators are usually quite simple state machines, and you can use a saga framework to simplify your code. Nevertheless, transaction management is certainly more complicated than in a monolithic architecture. But that’s usually a small price to pay for the tremendous benefits of microservices.


Summary

  • Some system operations need to update data scattered across multiple services. Traditional, XA/2PC-based distributed transactions aren’t a good fit for modern applications. A better approach is to use the Saga pattern. A saga is a sequence of local transactions that are coordinated using messaging. Each local transaction updates data in a single service. Because each local transaction commits its changes, if a saga must roll back due to the violation of a business rule, it must execute compensating transactions to explicitly undo changes.
  • You can use either choreography or orchestration to coordinate the steps of a saga. In a choreography-based saga, a local transaction publishes events that trigger other participants to execute local transactions. In an orchestration-based saga, a centralized saga orchestrator sends command messages to participants telling them to execute local transactions. You can simplify development and testing by modeling saga orchestrators as state machines. Simple sagas can use choreography, but orchestration is usually a better approach for complex sagas.
  • Designing saga-based business logic can be challenging because, unlike ACID transactions, sagas aren’t isolated from one another. You must often use countermeasures, which are design strategies that prevent concurrency anomalies caused by the ACD transaction model. An application may even need to use locking in order to simplify the business logic, even though that risks deadlocks.


Chapter 5. Designing business logic in a microservice architecture


This chapter covers

  • Applying the business logic organization patterns: Transaction script pattern and Domain model pattern
  • Designing business logic with the Domain-driven design (DDD) aggregate pattern
  • Applying the Domain event pattern in a microservice architecture


The heart of an enterprise application is the business logic, which implements the business rules. Developing complex business logic is always challenging. The FTGO application implements some quite complex business logic, especially for order management and delivery management. Mary had encouraged her team to apply object-oriented design principles, because in her experience this was the best way to implement complex business logic. Some of the business logic used the procedural Transaction script pattern. But the majority of the FTGO application’s business logic is implemented in an object-oriented domain model that’s mapped to the database using JPA.


Developing complex business logic is even more challenging in a microservice architecture where the business logic is spread over multiple services. You need to address two key challenges. First, a typical domain model is a tangled web of interconnected classes. Although this isn’t a problem in a monolithic application, in a microservice architecture, where classes are scattered around different services, you need to eliminate object references that would otherwise span service boundaries. The second challenge is designing business logic that works within the transaction management constraints of a microservice architecture. Your business logic can use ACID transactions within services, but as described in chapter 4, it must use the Saga pattern to maintain data consistency across services.


Fortunately, we can address these issues by using the Aggregate pattern from DDD. The Aggregate pattern structures a service’s business logic as a collection of aggregates. An aggregate is a cluster of objects that can be treated as a unit. There are two reasons why aggregates are useful when developing business logic in a microservice architecture:

  • Aggregates avoid any possibility of object references spanning service boundaries, because an inter-aggregate reference is a primary key value rather than an object reference.
  • Because a transaction can only create or update a single aggregate, aggregates fit the constraints of the microservices transaction model.


As a result, an ACID transaction is guaranteed to be within a single service.


I begin this chapter by describing the different ways of organizing business logic: the Transaction script pattern and the Domain model pattern. Next I introduce the concept of a DDD aggregate and explain why it’s a good building block for a service’s business logic. After that, I describe the Domain event pattern and explain why it’s useful for a service to publish events. I end this chapter with a couple of examples of business logic from Kitchen Service and Order Service.


Let’s now look at business logic organization patterns.


5.1. Business logic organization patterns


Figure 5.1 shows the architecture of a typical service. As described in chapter 2, the business logic is the core of a hexagonal architecture. Surrounding the business logic are the inbound and outbound adapters. An inbound adapter handles requests from clients and invokes the business logic. An outbound adapter, which is invoked by the business logic, invokes other services and applications.

Figure 5.1. Order Service has a hexagonal architecture. It consists of the business logic and one or more adapters that interface with external applications and other services.


This service consists of the business logic and the following adapters:

  • REST API adapter: An inbound adapter that implements a REST API which invokes the business logic
  • OrderCommandHandlers: An inbound adapter that consumes command messages from a message channel and invokes the business logic
  • Database Adapter: An outbound adapter that’s invoked by the business logic to access the database
  • Domain Event Publishing Adapter: An outbound adapter that publishes events to a message broker


The business logic is typically the most complex part of the service. When developing business logic, you should consciously organize your business logic in the way that’s most appropriate for your application. After all, I’m sure you’ve experienced the frustration of having to maintain someone else’s badly structured code. Most enterprise applications are written in an object-oriented language such as Java, so they consist of classes and methods. But using an object-oriented language doesn’t guarantee that the business logic has an object-oriented design. The key decision you must make when developing business logic is whether to use an object-oriented approach or a procedural approach. There are two main patterns for organizing business logic: the procedural Transaction script pattern, and the object-oriented Domain model pattern.


5.1.1. Designing business logic using the Transaction script pattern


Although I’m a strong advocate of the object-oriented approach, there are some situations where it is overkill, such as when you are developing simple business logic. In such a situation, a better approach is to write procedural code and use what the book Patterns of Enterprise Application Architecture by Martin Fowler (Addison-Wesley Professional, 2002) calls the Transaction script pattern. Rather than doing any object-oriented design, you write a method called a transaction script to handle each request from the presentation tier. As figure 5.2 shows, an important characteristic of this approach is that the classes that implement behavior are separate from those that store state.

Figure 5.2. Organizing business logic as transaction scripts. In a typical transaction script-based design, one set of classes implements behavior and another set stores state. The transaction scripts are organized into classes that typically have no state. The scripts use data classes, which typically have no behavior.


When using the Transaction script pattern, the scripts are usually located in service classes, which in this example is the OrderService class. A service class has one method for each request/system operation. The method implements the business logic for that request. It accesses the database using data access objects (DAOs), such as the OrderDao. The data objects, which in this example is the Order class, are pure data with little or no behavior.
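As a rough sketch of this structure (the class and method bodies here are illustrative, not FTGO's actual code), the behavior lives in the service class while Order is pure data:

```java
import java.util.List;

// Data class: pure state, little or no behavior.
class Order {
    long id;
    String consumerId;
    List<String> lineItems;
}

// Hypothetical DAO interface; a real one would talk to the database.
interface OrderDao {
    void save(Order order);
}

// Service class holding the transaction scripts: one method per request.
class OrderService {
    private final OrderDao orderDao;

    OrderService(OrderDao orderDao) {
        this.orderDao = orderDao;
    }

    // The transaction script: all the logic lives here, not in Order.
    Order createOrder(String consumerId, List<String> lineItems) {
        if (lineItems.isEmpty()) {
            throw new IllegalArgumentException("order must have line items");
        }
        Order order = new Order();
        order.consumerId = consumerId;
        order.lineItems = lineItems;
        orderDao.save(order);
        return order;
    }
}
```

Note how every decision about an order is made inside `OrderService`; the `Order` class merely carries the data between the script and the DAO.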

Pattern: Transaction script


Organize the business logic as a collection of procedural transaction scripts, one for each type of request.


This style of design is highly procedural and relies on few of the capabilities of object-oriented programming (OOP) languages. This is what you would create if you were writing the application in C or another non-OOP language. Nevertheless, you shouldn’t be ashamed to use a procedural design when it’s appropriate. This approach works well for simple business logic. The drawback is that it tends not to be a good way to implement complex business logic.


5.1.2. Designing business logic using the Domain model pattern


The simplicity of the procedural approach can be quite seductive. You can write code without having to carefully consider how to organize the classes. The problem is that if your business logic becomes complex, you can end up with code that’s a nightmare to maintain. In fact, in the same way that a monolithic application has a habit of continually growing, transaction scripts have the same problem. Consequently, unless you’re writing an extremely simple application, you should resist the temptation to write procedural code and instead apply the Domain model pattern and develop an object-oriented design.

Pattern: Domain model


Organize the business logic as an object model consisting of classes that have state and behavior.


In an object-oriented design, the business logic consists of an object model, a network of relatively small classes. These classes typically correspond directly to concepts from the problem domain. In such a design some classes have only either state or behavior, but many contain both, which is the hallmark of a well-designed class. Figure 5.3 shows an example of the Domain model pattern.

Figure 5.3. Organizing business logic as a domain model. Most of the business logic consists of classes that have both state and behavior.


As with the Transaction script pattern, an OrderService class has a method for each request/system operation. But when using the Domain model pattern, the service methods are usually simple. That’s because a service method almost always delegates to persistent domain objects, which contain the bulk of the business logic. A service method might, for example, load a domain object from the database and invoke one of its methods. In this example, the Order class has both state and behavior. Moreover, its state is private and can only be accessed indirectly via its methods.
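A minimal sketch of this style (class names and the in-memory repository are illustrative): the service method is thin and delegates to the domain object, whose state is private:

```java
import java.util.HashMap;
import java.util.Map;

// Domain object: state is private and changed only via its methods.
class Order {
    private final long id;
    private String state = "APPROVED";

    Order(long id) { this.id = id; }

    long getId() { return id; }
    String getState() { return state; }

    // The business logic lives on the domain object.
    void cancel() {
        if (!state.equals("APPROVED")) {
            throw new IllegalStateException("cannot cancel order in state " + state);
        }
        state = "CANCELLED";
    }
}

// Hypothetical in-memory repository standing in for a JPA repository.
class OrderRepository {
    private final Map<Long, Order> orders = new HashMap<>();
    void save(Order order) { orders.put(order.getId(), order); }
    Order findById(long id) { return orders.get(id); }
}

// The service method is simple: load the domain object and delegate to it.
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) { this.repository = repository; }

    void cancelOrder(long orderId) {
        Order order = repository.findById(orderId);
        order.cancel();
        repository.save(order);
    }
}
```

Compare this with the transaction script version: the state-transition rule now sits inside `Order.cancel()`, so no caller can put an `Order` into an invalid state.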


Using an object-oriented design has a number of benefits. First, the design is easy to understand and maintain. Instead of consisting of one big class that does everything, it consists of a number of small classes that each have a small number of responsibilities. In addition, classes such as Account, BankingTransaction, and OverdraftPolicy closely mirror the real world, which makes their role in the design easier to understand. Second, our object-oriented design is easier to test: each class can and should be tested independently. Finally, an object-oriented design is easier to extend because it can use well-known design patterns, such as the Strategy pattern and the Template method pattern, that define ways of extending a component without modifying the code.


The Domain model pattern works well, but there are a number of problems with this approach, especially in a microservice architecture. To address those problems, you need to use a refinement of OOD known as DDD.


5.1.3. About Domain-driven design


DDD, which is described in the book Domain-Driven Design by Eric Evans (Addison-Wesley Professional, 2003), is a refinement of OOD and is an approach for developing complex business logic. I introduced DDD in chapter 2 when discussing the usefulness of DDD subdomains when decomposing an application into services. When using DDD, each service has its own domain model, which avoids the problems of a single, application-wide domain model. Subdomains and the associated concept of Bounded Context are two of the strategic DDD patterns.


DDD also has some tactical patterns that are building blocks for domain models. Each pattern is a role that a class plays in a domain model and defines the characteristics of the class. The building blocks that have been widely adopted by developers include the following:

  • Entity: An object that has a persistent identity. Two entities whose attributes have the same values are still different objects. In a Java EE application, classes that are persisted using JPA @Entity are usually DDD entities.
  • Value object: An object that is a collection of values. Two value objects whose attributes have the same values can be used interchangeably. An example of a value object is a Money class, which consists of a currency and an amount.
  • Factory: An object or method that implements object creation logic that’s too complex to be done directly by a constructor. It can also hide the concrete classes that are instantiated. A factory might be implemented as a static method of a class.
  • Repository: An object that provides access to persistent entities and encapsulates the mechanism for accessing the database.
  • Service: An object that implements business logic that doesn’t belong in an entity or a value object.
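For instance, a minimal Money value object (a sketch, not FTGO's actual class) is immutable and compared by value, so two instances with equal values are interchangeable:

```java
import java.util.Objects;

// A value object: immutable, and compared by value rather than identity.
final class Money {
    private final String currency;
    private final long amountInCents;

    Money(String currency, long amountInCents) {
        this.currency = currency;
        this.amountInCents = amountInCents;
    }

    // Operations return new instances instead of mutating state.
    Money add(Money other) {
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("currency mismatch");
        }
        return new Money(currency, amountInCents + other.amountInCents);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Money)) return false;
        Money m = (Money) o;
        return currency.equals(m.currency) && amountInCents == m.amountInCents;
    }

    @Override
    public int hashCode() {
        return Objects.hash(currency, amountInCents);
    }
}
```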


These building blocks are used by many developers. Some are supported by frameworks such as JPA and the Spring framework. There is one more building block that has been generally ignored (myself included!) except by DDD purists: aggregates. As it turns out, aggregates are an extremely useful concept when developing microservices. Let’s first look at some subtle problems with classic OOD that are solved by using aggregates.


5.2. Designing a domain model using the DDD aggregate pattern


In traditional object-oriented design, a domain model is a collection of classes and relationships between classes. The classes are usually organized into packages. For example, figure 5.4 shows part of a domain model for the FTGO application. It’s a typical domain model consisting of a web of interconnected classes.

Figure 5.4. A traditional domain model is a web of interconnected classes. It doesn’t explicitly specify the boundaries of business objects, such as Consumer and Order.


This example has several classes corresponding to business objects: Consumer, Order, Restaurant, and Courier. But interestingly, the explicit boundaries of each business object are missing from this kind of traditional domain model. It doesn’t specify, for example, which classes are part of the Order business object. This lack of boundaries can sometimes cause problems, especially in a microservice architecture.


I begin this section with an example problem caused by the lack of explicit boundaries. Next I describe the concept of an aggregate and how it has explicit boundaries. After that, I describe the rules that aggregates must obey and how they make aggregates a good fit for the microservice architecture. I then describe how to carefully choose the boundaries of your aggregates and why it matters. Finally, I discuss how to design business logic using aggregates. Let’s first take a look at the problems caused by fuzzy boundaries.


5.2.1. The problem with fuzzy boundaries


Imagine, for example, that you want to perform an operation, such as a load or delete, on an Order business object. What exactly does that mean? What is the scope of the operation? You would certainly load or delete the Order object. But in reality there’s more to an Order than simply the Order object. There are also the order line items, the payment information, and so on. Figure 5.4 leaves the boundaries of a domain object to the developer’s intuition.


Besides a conceptual fuzziness, the lack of explicit boundaries causes problems when updating a business object. A typical business object has invariants, business rules that must be enforced at all times. An Order has a minimum order amount, for example. The FTGO application must ensure that any attempt to update an order doesn’t violate an invariant such as the minimum order amount. The challenge is that in order to enforce invariants, you must design your business logic carefully.


For example, let’s look at how to ensure the order minimum is met when multiple consumers work together to create an order. Two consumers—Sam and Mary—are working together on an order and simultaneously decide that the order exceeds their budget. Sam reduces the quantity of samosas, and Mary reduces the quantity of naan bread. From the application’s perspective, both consumers retrieve the order and its line items from the database. Both consumers then update a line item to reduce the cost of the order. From each consumer’s perspective the order minimum is preserved. Here’s the sequence of database transactions.

Consumer - Mary

BEGIN TXN

   SELECT ORDER_TOTAL FROM ORDER
     WHERE ORDER_ID = X

   SELECT * FROM ORDER_LINE_ITEM
      WHERE ORDER_ID = X
   ...
END TXN

Verify minimum is met

BEGIN TXN

   UPDATE ORDER_LINE_ITEM
     SET VERSION=..., QUANTITY=...
   WHERE VERSION = <loaded version>
    AND ID = ...

END TXN

Consumer - Sam

BEGIN TXN

   SELECT ORDER_TOTAL FROM ORDER
     WHERE ORDER_ID = X

   SELECT * FROM ORDER_LINE_ITEM
      WHERE ORDER_ID = X
   ...
END TXN

Verify minimum is met

BEGIN TXN

   UPDATE ORDER_LINE_ITEM
     SET VERSION=..., QUANTITY=...
   WHERE VERSION = <loaded version>
    AND ID = ...

END TXN


Each consumer changes a line item using a sequence of two transactions. The first transaction loads the order and its line items. The UI verifies that the order minimum is satisfied before executing the second transaction. The second transaction updates the line item quantity using an optimistic offline locking check that verifies that the order line is unchanged since it was loaded by the first transaction.
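The version check in the second transaction can be sketched in plain Java (illustrative only; real code would issue the UPDATE ... WHERE VERSION = ? statement against the database):

```java
// A line item row with a version column, as used by optimistic offline locking.
class LineItemRow {
    private long version = 1;
    private int quantity;

    LineItemRow(int quantity) { this.quantity = quantity; }

    long getVersion() { return version; }
    int getQuantity() { return quantity; }

    // Mimics UPDATE ... SET VERSION = VERSION + 1, QUANTITY = ?
    //        WHERE VERSION = <loaded version>:
    // the update succeeds only if the row is unchanged since it was read.
    synchronized boolean update(long loadedVersion, int newQuantity) {
        if (version != loadedVersion) {
            return false; // someone else updated the row first
        }
        version++;
        quantity = newQuantity;
        return true;
    }
}
```

Notice that the check protects each row individually; as the scenario below shows, it cannot protect an invariant, such as the order minimum, that spans multiple rows.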


In this scenario, Sam reduces the order total by $X and Mary reduces it by $Y. As a result, the Order is no longer valid, even though the application verified that the order still satisfied the order minimum after each consumer’s update. As you can see, directly updating part of a business object can result in the violation of the business rules. DDD aggregates are intended to solve this problem.


5.2.2. Aggregates have explicit boundaries


An aggregate is a cluster of domain objects within a boundary that can be treated as a unit. It consists of a root entity and possibly one or more other entities and value objects. Many business objects are modeled as aggregates. For example, in chapter 2 we created a rough domain model by analyzing the nouns used in the requirements and by domain experts. Many of these nouns, such as Order, Consumer, and Restaurant, are aggregates.

Pattern: Aggregate


Organize a domain model as a collection of aggregates, each of which is a graph of objects that can be treated as a unit.


Figure 5.5 shows the Order aggregate and its boundary. An Order aggregate consists of an Order entity, one or more OrderLineItem value objects, and other value objects such as a delivery Address and PaymentInformation.

Figure 5.5. Structuring a domain model as a set of aggregates makes the boundaries explicit.


Aggregates decompose a domain model into chunks, which are individually easier to understand. They also clarify the scope of operations such as load, update, and delete. These operations act on the entire aggregate rather than on parts of it. An aggregate is often loaded in its entirety from the database, thereby avoiding any complications of lazy loading. Deleting an aggregate removes all of its objects from a database.

Aggregates are consistency boundaries


Updating an entire aggregate rather than its parts solves the consistency issues, such as the example described earlier. Update operations are invoked on the aggregate root, which enforces invariants. Also, concurrency is handled by locking the aggregate root using, for example, a version number or a database-level lock. For example, instead of updating line items’ quantities directly, a client must invoke a method on the root of the Order aggregate, which enforces invariants such as the minimum order amount. Note, though, that this approach doesn’t require the entire aggregate to be updated in the database. An application might, for example, only update the rows corresponding to the Order object and the updated OrderLineItem.
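A sketch of this idea (the ORDER_MINIMUM value and method names are illustrative): clients change quantities only through the Order root, which rejects any update that would violate the minimum:

```java
import java.util.ArrayList;
import java.util.List;

// Part of the Order aggregate: a line item.
class OrderLineItem {
    private final String menuItem;
    private final long unitPriceInCents;
    private int quantity;

    OrderLineItem(String menuItem, long unitPriceInCents, int quantity) {
        this.menuItem = menuItem;
        this.unitPriceInCents = unitPriceInCents;
        this.quantity = quantity;
    }

    String getMenuItem() { return menuItem; }
    int getQuantity() { return quantity; }
    long getTotal() { return unitPriceInCents * quantity; }
    void setQuantity(int quantity) { this.quantity = quantity; }
}

// The aggregate root: the only entry point for updates.
class Order {
    private static final long ORDER_MINIMUM_IN_CENTS = 1000; // illustrative value

    private final List<OrderLineItem> lineItems = new ArrayList<>();

    void addLineItem(String menuItem, long unitPriceInCents, int quantity) {
        lineItems.add(new OrderLineItem(menuItem, unitPriceInCents, quantity));
    }

    long getOrderTotal() {
        return lineItems.stream().mapToLong(OrderLineItem::getTotal).sum();
    }

    // The invariant is checked here, so no caller can bypass it.
    void changeLineItemQuantity(String menuItem, int newQuantity) {
        OrderLineItem item = lineItems.stream()
                .filter(li -> li.getMenuItem().equals(menuItem))
                .findFirst()
                .orElseThrow(() -> new IllegalArgumentException("no such line item"));
        int oldQuantity = item.getQuantity();
        item.setQuantity(newQuantity);
        if (getOrderTotal() < ORDER_MINIMUM_IN_CENTS) {
            item.setQuantity(oldQuantity); // undo the in-memory change
            throw new IllegalStateException("order would fall below the order minimum");
        }
    }
}
```

Because Sam's and Mary's updates would both go through `changeLineItemQuantity()` on the same locked root, the combined reduction that slips past the per-row checks in the earlier scenario is caught here.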

Identifying aggregates is key


In DDD, a key part of designing a domain model is identifying aggregates, their boundaries, and their roots. The details of the aggregates’ internal structure are secondary. The benefit of aggregates, however, goes far beyond modularizing a domain model. That’s because aggregates must obey certain rules.


5.2.3. Aggregate rules


DDD requires aggregates to obey a set of rules. These rules ensure that an aggregate is a self-contained unit that can enforce its invariants. Let’s look at each of the rules.

Rule #1: Reference only the aggregate root


The previous example illustrated the perils of updating OrderLineItems directly. The goal of the first aggregate rule is to eliminate this problem. It requires that the root entity be the only part of an aggregate that can be referenced by classes outside of the aggregate. A client can only update an aggregate by invoking a method on the aggregate root.


A service, for example, uses a repository to load an aggregate from the database and obtain a reference to the aggregate root. It updates an aggregate by invoking a method on the aggregate root. This rule ensures that the aggregate can enforce its invariant.

Rule #2: Inter-aggregate references must be primary keys


Another rule is that aggregates reference each other by identity (for example, primary key) instead of object references. For example, as figure 5.6 shows, an Order references its Consumer using a consumerId rather than a reference to the Consumer object. Similarly, an Order references a Restaurant using a restaurantId.

Figure 5.6. References between aggregates are by primary key instead of by object reference. The Order aggregate has the IDs of the Consumer and Restaurant aggregates. Within an aggregate, objects reference one another.
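In sketch form (field and class names are illustrative), the rule means the Order aggregate holds IDs rather than object references to other aggregates:

```java
// The Order aggregate references other aggregates by primary key only.
class Order {
    private final long id;
    private final long consumerId;   // ID of the Consumer aggregate
    private final long restaurantId; // ID of the Restaurant aggregate

    Order(long id, long consumerId, long restaurantId) {
        this.id = id;
        this.consumerId = consumerId;
        this.restaurantId = restaurantId;
    }

    long getId() { return id; }
    long getConsumerId() { return consumerId; }
    long getRestaurantId() { return restaurantId; }
}

// A separate aggregate; an Order never holds a direct reference to it,
// even if it lives in another service.
class Consumer {
    private final long id;

    Consumer(long id) { this.id = id; }

    long getId() { return id; }
}
```

Code that needs the Consumer looks it up by `consumerId` through its own repository (or service API) instead of navigating an object graph.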


This approach is quite different from traditional object modeling, which considers foreign keys in the domain model to be a design smell. It has a number of benefits. The use of identity rather than object references means that the aggregates are loosely coupled. It ensures that the boundaries between aggregates are well defined and avoids accidentally updating a different aggregate. Also, if an aggregate is part of another service, there isn’t a problem of object references that span services.


This approach also simplifies persistence since the aggregate is the unit of storage. It makes it easier to store aggregates in a NoSQL database such as MongoDB. It also eliminates the need for transparent lazy loading and its associated problems. Scaling the database by sharding aggregates is relatively straightforward.

Rule #3: One transaction creates or updates one aggregate


Another rule that aggregates must obey is that a transaction can only create or update a single aggregate. When I first read about it many years ago, this rule made no sense! At the time, I was developing traditional monolithic applications that used an RDBMS, so transactions could update multiple aggregates. Today, this constraint is perfect for the microservice architecture. It ensures that a transaction is contained within a service. This constraint also matches the limited transaction model of most NoSQL databases.


This rule makes it more complicated to implement operations that need to create or update multiple aggregates. But this is exactly the problem that sagas (described in chapter 4) are designed to solve. Each step of the saga creates or updates exactly one aggregate. Figure 5.7 shows how this works.

图 5.7. 事务只能创建或更新单个聚合,因此应用程序使用 saga 来更新多个聚合。saga 的每个步骤都创建或更新一个聚合。

在此示例中,saga 由三个事务组成。第一个事务更新服务 A 中的聚合 X。其他两个事务都在服务 B 中:一个事务更新聚合 Y,另一个事务更新聚合 Z。

In this example, the saga consists of three transactions. The first transaction updates aggregate X in service A. The other two transactions are both in service B. One transaction updates aggregate Y, and the other updates aggregate Z.

在单个服务内维护多个聚合之间一致性的另一种方法是"作弊":在一个事务中更新多个聚合。例如,服务 B 可以在单个事务中更新聚合 Y 和 Z。只有在使用支持丰富事务模型的数据库(如 RDBMS)时,这才有可能。如果您使用的是只支持简单事务的 NoSQL 数据库,那么除了使用 saga 之外别无选择。

An alternative approach to maintaining consistency across multiple aggregates within a single service is to cheat and update multiple aggregates within a transaction. For example, service B could update aggregates Y and Z in a single transaction. This is only possible when using a database, such as an RDBMS, that supports a rich transaction model. If you’re using a NoSQL database that only has simple transactions, there’s no other option except to use sagas.

或者真的别无选择吗?事实证明,聚合边界并非一成不变。在开发领域模型时,您可以选择边界的位置。但就像 20 世纪划定国界的殖民大国一样,您需要小心行事。

Or is there? It turns out that aggregate boundaries are not set in stone. When developing a domain model, you get to choose where the boundaries lie. But like a 20th century colonial power drawing national boundaries, you need to be careful.

5.2.4. 聚合粒度

5.2.4. Aggregate granularity

在开发领域模型时,您必须做出的一个关键决策是每个聚合的大小。一方面,聚合理想情况下应该较小。由于对每个聚合的更新是串行化的,更细粒度的聚合会增加应用程序可以处理的并发请求数,从而提高可扩展性。它还会改善用户体验,因为它降低了两个用户尝试对同一聚合进行冲突更新的可能性。另一方面,由于聚合是事务的范围,您可能需要定义更大的聚合才能使特定更新具有原子性。

When developing a domain model, a key decision you must make is how large to make each aggregate. On one hand, aggregates should ideally be small. Because updates to each aggregate are serialized, more fine-grained aggregates will increase the number of simultaneous requests that the application can handle, improving scalability. It will also improve the user experience because it reduces the chance of two users attempting conflicting updates of the same aggregate. On the other hand, because an aggregate is the scope of a transaction, you may need to define a larger aggregate in order to make a particular update atomic.

例如,前面我提到过,在 FTGO 应用程序的领域模型中,Order 和 Consumer 是单独的聚合。另一种设计是让 Order 成为 Consumer 聚合的一部分。图 5.8 显示了这种替代设计。

For example, earlier I mentioned how in the FTGO application’s domain model Order and Consumer are separate aggregates. An alternative design is to make Order part of the Consumer aggregate. Figure 5.8 shows this alternative design.

图 5.8. 另一种设计定义了一个包含 Consumer 和 Order 类的 Consumer 聚合。此设计使应用程序能够原子地更新一个 Consumer 及其一个或多个 Order。

这种更大的 Consumer 聚合的一个好处是,应用程序可以原子地更新一个 Consumer 及其一个或多个 Order。这种方法的缺点是它降低了可伸缩性:更新同一客户不同订单的事务将被串行化。同样,如果两个用户尝试编辑同一客户的不同订单,他们也会发生冲突。

A benefit of this larger Consumer aggregate is that the application can atomically update a Consumer and one or more of its Orders. A drawback of this approach is that it reduces scalability. Transactions that update different orders for the same customer would be serialized. Similarly, two users would conflict if they attempted to edit different orders for the same customer.

在微服务架构中,这种方法的另一个缺点是它阻碍了分解。例如,Order 和 Consumer 的业务逻辑必须并置在同一个服务中,这会使服务变大。由于这些问题,最好使聚合尽可能细粒度。

Another drawback of this approach in a microservice architecture is that it is an obstacle to decomposition. The business logic for Orders and Consumers, for example, must be collocated in the same service, which makes the service larger. Because of these issues, making aggregates as fine-grained as possible is best.

5.2.5. 使用聚合设计业务逻辑

5.2.5. Designing business logic with aggregates

在典型的(微)服务中,大部分业务逻辑由聚合组成。业务逻辑的其余部分驻留在领域服务和 saga 中。saga 编排本地事务序列以实施数据一致性。服务是业务逻辑的入口点,由入站适配器调用。服务使用存储库从数据库中检索聚合或将聚合保存到数据库。每个存储库都由访问数据库的出站适配器实现。图 5.9 显示了 Order Service 业务逻辑的基于聚合的设计。

In a typical (micro)service, the bulk of the business logic consists of aggregates. The rest of the business logic resides in the domain services and the sagas. The sagas orchestrate sequences of local transactions in order to enforce data consistency. The services are the entry points into the business logic and are invoked by inbound adapters. A service uses a repository to retrieve aggregates from the database or save aggregates to the database. Each repository is implemented by an outbound adapter that accesses the database. Figure 5.9 shows the aggregate-based design of the business logic for the Order Service.

图 5.9. Order Service 基于聚合的业务逻辑设计

业务逻辑由 Order 聚合、OrderService 服务类、OrderRepository 以及一个或多个 saga 组成。OrderService 调用 OrderRepository 来保存和加载 Order。对于服务本地的简单请求,服务会更新 Order 聚合。如果更新请求跨越多个服务,OrderService 还将创建一个 saga,如第 4 章所述。

The business logic consists of the Order aggregate, the OrderService service class, the OrderRepository, and one or more sagas. The OrderService invokes the OrderRepository to save and load Orders. For simple requests that are local to the service, the service updates an Order aggregate. If an update request spans multiple services, the OrderService will also create a saga, as described in chapter 4.

我们将看一下代码,但首先,让我们研究一个与聚合密切相关的概念:域事件。

We’ll take a look at the code—but first, let’s examine a concept that’s closely related to aggregates: domain events.

5.3. 发布域事件

5.3. Publishing domain events

Merriam-Webster (https://www.merriam-webster.com/dictionary/event) 列出了事件一词的几个定义,包括:

Merriam-Webster (https://www.merriam-webster.com/dictionary/event) lists several definitions of the word event, including these:

  • 发生的事情
  • Something that happens
  • 值得注意的事件
  • A noteworthy happening
  • 社交场合或活动
  • A social occasion or activity
  • 不良或破坏性的医疗事件,如心脏病发作或其他心脏事件
  • An adverse or damaging medical occurrence, a heart attack or other cardiac event

在 DDD 的上下文中,领域事件是发生在聚合上的事情。它由领域模型中的一个类表示。事件通常表示状态更改。例如,考虑 FTGO 应用程序中的 Order 聚合。其状态更改事件包括 Order Created、Order Cancelled、Order Shipped 等。如果有感兴趣的使用者,Order 聚合可能会在每次发生状态转换时发布其中一个事件。

In the context of DDD, a domain event is something that has happened to an aggregate. It’s represented by a class in the domain model. An event usually represents a state change. Consider, for example, an Order aggregate in the FTGO application. Its state-changing events include Order Created, Order Cancelled, Order Shipped, and so forth. An Order aggregate might, if there are interested consumers, publish one of the events each time it undergoes a state transition.

模式:域事件

聚合在被创建或发生其他重大更改时发布领域事件。

An aggregate publishes a domain event when it’s created or undergoes some other significant change.

5.3.1. 为什么要发布更改事件?

5.3.1. Why publish change events?

领域事件非常有用,因为其他各方(用户、其他应用程序或同一应用程序中的其他组件)通常对了解聚合的状态更改感兴趣。以下是一些示例场景:

Domain events are useful because other parties—users, other applications, or other components within the same application—are often interested in knowing about an aggregate’s state changes. Here are some example scenarios:

  • 使用基于 Choreography 的 Sagas 维护服务之间的数据一致性,如第 4 章所述。
  • Maintaining data consistency across services using choreography-based sagas, described in chapter 4.
  • 通知维护副本的服务源数据已更改。这种方法称为命令查询职责分离(CQRS),第 7 章对此进行了介绍。
  • Notifying a service that maintains a replica that the source data has changed. This approach is known as Command Query Responsibility Segregation (CQRS), and it’s described in chapter 7.
  • 通过已注册的 webhook 或通过消息代理通知其他应用程序,以触发业务流程的下一步。
  • Notifying a different application via a registered webhook or via a message broker in order to trigger the next step in a business process.
  • 通知同一应用程序的不同组件,例如向用户的浏览器发送 WebSocket 消息,或更新 ElasticSearch 等文本数据库。
  • Notifying a different component of the same application in order, for example, to send a WebSocket message to a user’s browser or update a text database such as ElasticSearch.
  • 向用户发送通知(短信或电子邮件),告知他们订单已发货、处方药已准备好领取,或者航班延误。
  • Sending notifications—text messages or emails—to users informing them that their order has shipped, their Rx prescription is ready for pick up, or their flight is delayed.
  • 监视域事件以验证应用程序是否正常运行。
  • Monitoring domain events to verify that the application is behaving correctly.
  • 分析事件以对用户行为进行建模。
  • Analyzing events to model user behavior.

在所有这些情况下,通知的触发器是应用程序数据库中聚合的状态更改。

The trigger for the notification in all these scenarios is the state change of an aggregate in an application’s database.

5.3.2. 什么是域事件?

5.3.2. What is a domain event?

领域事件是一个类,其名称使用过去分词动词构成。它具有有意义地传达事件的属性,每个属性要么是基元值,要么是值对象。例如,OrderCreated 事件类具有 orderId 属性。

A domain event is a class with a name formed using a past-participle verb. It has properties that meaningfully convey the event. Each property is either a primitive value or a value object. For example, an OrderCreated event class has an orderId property.

领域事件通常还具有元数据,例如事件 ID 和时间戳。它还可能包含进行更改的用户的身份,因为这对审计很有用。元数据可以是事件对象的一部分,可能定义在超类中。或者,事件元数据可以位于包装事件对象的信封(envelope)对象中。发出事件的聚合的 ID 也可能是信封的一部分,而不是显式的事件属性。

A domain event typically also has metadata, such as the event ID and a timestamp. It might also have the identity of the user who made the change, because that's useful for auditing. The metadata can be part of the event object, perhaps defined in a superclass. Alternatively, the event metadata can be in an envelope object that wraps the event object. The ID of the aggregate that emitted the event might also be part of the envelope rather than an explicit event property.

OrderCreated 事件是领域事件的一个示例。它没有任何字段,因为 Order 的 ID 是事件信封的一部分。下面的清单显示了 OrderCreated 事件类和 DomainEventEnvelope 类。

The OrderCreated event is an example of a domain event. It doesn’t have any fields, because the Order’s ID is part of the event envelope. The following listing shows the OrderCreated event class and the DomainEventEnvelope class.

清单 5.1. OrderCreated 事件和 DomainEventEnvelope 类
interface DomainEvent {}

interface OrderDomainEvent extends DomainEvent {}

class OrderCreated implements OrderDomainEvent {}

class DomainEventEnvelope<T extends DomainEvent> {
  private String aggregateType;                        1
  private Object aggregateId;
  private T event;
  ...
}

  • 1 事件的元数据
  • 1 The event’s metadata

DomainEvent 接口是一个标记接口,用于将类标识为领域事件。OrderDomainEvent 是 Order 聚合所发布事件(如 OrderCreated)的标记接口。DomainEventEnvelope 是一个包含事件元数据和事件对象的类,它是一个由领域事件类型参数化的泛型类。

The DomainEvent interface is a marker interface that identifies a class as a domain event. OrderDomainEvent is a marker interface for events, such as OrderCreated, which are published by the Order aggregate. The DomainEventEnvelope is a class that contains event metadata and the event object. It’s a generic class that’s parameterized by the domain event type.

5.3.3. 事件扩充

5.3.3. Event enrichment

例如,假设您正在编写一个处理 Order 事件的事件使用者。前面显示的 OrderCreated 事件类捕获了所发生情况的本质。但是,您的事件使用者在处理 OrderCreated 事件时可能需要订单详细信息。一种选择是让它从 OrderService 检索该信息。事件使用者向服务查询聚合的缺点是,它会产生服务请求的开销。

Let’s imagine, for example, that you’re writing an event consumer that processes Order events. The OrderCreated event class shown previously captures the essence of what has happened. But your event consumer may need the order details when processing an OrderCreated event. One option is for it to retrieve that information from the OrderService. The drawback of an event consumer querying the service for the aggregate is that it incurs the overhead of a service request.

另一种称为事件扩充(event enrichment)的方法是让事件包含使用者需要的信息。它简化了事件使用者,因为他们不再需要从发布事件的服务请求数据。在 OrderCreated 事件中,Order 聚合可以通过包含订单详细信息来丰富事件。以下清单显示了扩充后的 OrderCreated 事件。

An alternative approach known as event enrichment is for events to contain information that consumers need. It simplifies event consumers because they no longer need to request that data from the service that published the event. In the OrderCreated event, the Order aggregate can enrich the event by including the order details. The following listing shows the enriched event.

清单 5.2. 扩充后的 OrderCreated 事件
class OrderCreated implements OrderEvent {
  private List<OrderLineItem> lineItems;
  private DeliveryInformation deliveryInformation;       1
  private PaymentInformation paymentInformation;
  private long restaurantId;
  private String restaurantName;
  ...
}

  • 1 消费者通常需要的数据
  • 1 Data that its consumers typically need

因为这个版本的 OrderCreated 事件包含了订单详细信息,所以事件使用者(比如第 7 章中讨论的 Order History Service)在处理 OrderCreated 事件时不再需要获取该数据。

Because this version of the OrderCreated event contains the order details, an event consumer, such as the Order History Service (discussed in chapter 7), no longer needs to fetch that data when processing an OrderCreated event.

尽管事件扩充简化了使用者,但缺点是它可能会使事件类不太稳定。每当使用者的需求发生变化时,事件类就可能需要改变。这会降低可维护性,因为这种更改可能会影响应用程序的多个部分。试图满足每个使用者也可能是徒劳的。幸运的是,在许多情况下,事件中要包含哪些属性是相当明显的。

Although event enrichment simplifies consumers, the drawback is that it risks making the event classes less stable. An event class potentially needs to change whenever the requirements of its consumers change. This can reduce maintainability because this kind of change can impact multiple parts of the application. Satisfying every consumer can also be a futile effort. Fortunately, in many situations it’s fairly obvious which properties to include in an event.

现在我们已经介绍了域事件的基础知识,让我们看看如何发现它们。

Now that we’ve covered the basics of domain events, let’s look at how to discover them.

5.3.4. 识别域事件

5.3.4. Identifying domain events

有几种不同的策略可用于识别领域事件。通常,需求会描述需要通知的场景,可能包括诸如"当 X 发生时执行 Y"之类的语言。例如,FTGO 应用程序中的一个需求是"下订单时,向消费者发送电子邮件"。对通知的需求表明存在领域事件。

There are a few different strategies for identifying domain events. Often the requirements will describe scenarios where notifications are required. The requirements might include language such as “When X happens do Y.” For example, one requirement in the FTGO application is “When an Order is placed send the consumer an email.” A requirement for a notification suggests the existence of a domain event.

另一种越来越受欢迎的方法是使用事件风暴(event storming)。事件风暴是一种以事件为中心的研讨会形式,用于了解复杂领域。它需要将领域专家聚集在一个房间里,准备大量便利贴,以及一个非常大的表面(白板或纸卷)来粘贴便签。事件风暴的结果是一个以事件为中心的领域模型,由聚合和事件组成。

Another approach, which is increasing in popularity, is to use event storming. Event storming is an event-centric workshop format for understanding a complex domain. It involves gathering domain experts in a room, lots of sticky notes, and a very large surface—a whiteboard or paper roll—to stick the notes on. The result of event storming is an event-centric domain model consisting of aggregates and events.

事件风暴包括三个主要步骤:

Event storming consists of three main steps:

  1. 头脑风暴事件:请领域专家集思广益,找出领域事件。领域事件由橙色便签表示,按粗略的时间线排列在建模表面上。
  2. Brainstorm events: Ask the domain experts to brainstorm the domain events. Domain events are represented by orange sticky notes that are laid out in a rough timeline on the modeling surface.
  3. 确定事件触发器:请领域专家确定每个事件的触发器,即以下之一:

    • 用户操作,表示为使用蓝色便笺的命令
    • 外部系统,由紫色便签表示
    • 另一个域事件
    • 时间的流逝
  4. Identify event triggers: Ask the domain experts to identify the trigger of each event, which is one of the following:

    • User actions, represented as a command using a blue sticky note
    • External system, represented by a purple sticky note
    • Another domain event
    • Passing of time
  5. 标识聚合:请领域专家确定使用每个命令并发出相应事件的聚合。聚合由黄色便签表示。
  6. Identify aggregates: Ask the domain experts to identify the aggregate that consumes each command and emits the corresponding event. Aggregates are represented by yellow sticky notes.

图 5.10 显示了事件风暴研讨会的结果。在短短几个小时内,参与者就识别出了大量领域事件、命令和聚合。这是创建领域模型过程中良好的第一步。

Figure 5.10 shows the result of an event-storming workshop. In just a couple of hours, the participants identified numerous domain events, commands, and aggregates. It was a good first step in the process of creating a domain model.

图 5.10. 一次持续数小时的事件风暴研讨会的结果。便签包括:按时间线排列的事件;表示用户操作的命令;以及为响应命令而发出事件的聚合。

事件风暴是快速创建域模型的有用技术。

Event storming is a useful technique for quickly creating a domain model.

现在我们已经介绍了域事件的基础知识,让我们看看生成和发布它们的机制。

Now that we’ve covered the basics of domain events, let’s look at the mechanics of generating and publishing them.

5.3.5. 生成和发布域事件

5.3.5. Generating and publishing domain events

使用领域事件进行通信是异步消息传递的一种形式,已在第 3 章中讨论。但是,在业务逻辑能将事件发布到消息代理之前,必须先创建它们。让我们看看如何做到这一点。

Communicating using domain events is a form of asynchronous messaging, discussed in chapter 3. But before the business logic can publish them to a message broker, it must first create them. Let’s look at how to do that.

生成域事件

从概念上讲,领域事件由聚合发布。聚合知道其状态何时更改,因此知道要发布什么事件。聚合可以直接调用消息传递 API。这种方法的缺点是,由于聚合不能使用依赖注入,消息传递 API 需要作为方法参数传递。这将使基础设施关注点和业务逻辑交织在一起,是非常不可取的。

Conceptually, domain events are published by aggregates. An aggregate knows when its state changes and hence what event to publish. An aggregate could invoke a messaging API directly. The drawback of this approach is that because aggregates can’t use dependency injection, the messaging API would need to be passed around as a method argument. That would intertwine infrastructure concerns and business logic, which is extremely undesirable.

更好的方法是在聚合和调用它的服务(或等效类)之间分配责任。服务可以使用依赖注入来获取对消息传递 API 的引用,从而轻松发布事件。聚合在其状态发生变化时生成事件,并将它们返回给服务。聚合可以通过几种不同的方式将事件返回给服务。一种选择是让聚合方法的返回值包含事件列表。例如,下面的清单显示了 Ticket 聚合的 accept() 方法如何将 TicketAcceptedEvent 返回给其调用者。

A better approach is to split responsibility between the aggregate and the service (or equivalent class) that invokes it. Services can use dependency injection to obtain a reference to the messaging API, easily publishing events. The aggregate generates the events whenever its state changes and returns them to the service. There are a couple of different ways an aggregate can return events back to the service. One option is for the return value of an aggregate method to include a list of events. For example, the following listing shows how a Ticket aggregate’s accept() method can return a TicketAcceptedEvent to its caller.

清单 5.3. Ticket 聚合的 accept() 方法
public class Ticket {

   public List<DomainEvent> accept(ZonedDateTime readyBy) {
    ...
    this.acceptTime = ZonedDateTime.now();                       1
    this.readyBy = readyBy;
    return singletonList(new TicketAcceptedEvent(readyBy));      2
   }
}

  • 1 更新工单
  • 1 Updates the Ticket
  • 2 返回一个事件
  • 2 Returns an event

服务调用聚合根的方法,然后发布事件。例如,下面的清单显示了 KitchenService 如何调用 Ticket.accept() 并发布事件。

The service invokes the aggregate root’s method, and then publishes the events. For example, the following listing shows how KitchenService invokes Ticket.accept() and publishes the events.

清单 5.4. KitchenService 调用 Ticket.accept()
public class KitchenService {

  @Autowired
  private TicketRepository ticketRepository;

  @Autowired
  private DomainEventPublisher domainEventPublisher;

  public void accept(long ticketId, ZonedDateTime readyBy) {
    Ticket ticket =
          ticketRepository.findById(ticketId)
            .orElseThrow(() ->
                      new TicketNotFoundException(ticketId));
    List<DomainEvent> events = ticket.accept(readyBy);
    domainEventPublisher.publish(Ticket.class, ticketId, events);     1
  }
}

  • 1 发布域事件
  • 1 Publishes domain events

accept() 方法首先调用 TicketRepository 从数据库加载 Ticket,然后通过调用 Ticket 的 accept() 来更新它。之后,KitchenService 通过调用 DomainEventPublisher.publish()(稍后会描述)来发布 Ticket 返回的事件。

The accept() method first invokes the TicketRepository to load the Ticket from the database. It then updates the Ticket by calling accept(). KitchenService then publishes events returned by Ticket by calling DomainEventPublisher.publish(), described shortly.

这种方法非常简单。原本返回类型为 void 的方法现在返回 List<Event>。唯一可能的缺点是,非 void 方法的返回类型现在更加复杂:它们必须返回一个同时包含原始返回值和 List<Event> 的对象。您很快就会看到这种方法的示例。

This approach is quite simple. Methods that would otherwise have a void return type now return List<Event>. The only potential drawback is that the return type of non-void methods is now more complex. They must return an object containing the original return value and List<Event>. You’ll see an example of such a method soon.
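
Such a wrapper class might look like the following minimal sketch. (The Eventuate Tram framework used in this book provides a similar helper; this sketch is illustrative rather than the framework's actual API, and the names in `main` are hypothetical.)

```java
import java.util.List;

// Illustrative wrapper pairing a non-void method's return value with the
// domain events the aggregate generated. Not the framework's actual API.
public class ResultWithEvents<T> {
    public final T result;
    public final List<Object> events;

    public ResultWithEvents(T result, List<Object> events) {
        this.result = result;
        this.events = events;
    }

    public static void main(String[] args) {
        // A hypothetical createOrder() might return the new aggregate's ID
        // along with an OrderCreated event for the service to publish.
        ResultWithEvents<String> outcome =
                new ResultWithEvents<>("Order-1", List.of("OrderCreated"));
        System.out.println(outcome.result + " generated "
                + outcome.events.size() + " event(s)");
    }
}
```

The service unpacks the wrapper: it returns `result` to its caller and passes `events` to the event publisher.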

另一个选项是让聚合根在一个字段中累积事件,然后由服务检索这些事件并发布它们。例如,下面的清单显示了以这种方式工作的 Ticket 类的变体。

Another option is for the aggregate root to accumulate events in a field. The service then retrieves the events and publishes them. For example, the following listing shows a variant of the Ticket class that works this way.

清单 5.5. Ticket 扩展了一个记录领域事件的超类
public class Ticket extends AbstractAggregateRoot {

  public void accept(ZonedDateTime readyBy) {
    ...
    this.acceptTime = ZonedDateTime.now();
    this.readyBy = readyBy;
    registerDomainEvent(new TicketAcceptedEvent(readyBy));
  }

}

Ticket 扩展了 AbstractAggregateRoot,后者定义了记录事件的 registerDomainEvent() 方法。服务将调用 AbstractAggregateRoot.getDomainEvents() 来检索这些事件。

Ticket extends AbstractAggregateRoot, which defines a registerDomainEvent() method that records the event. A service would call AbstractAggregateRoot.getDomainEvents() to retrieve those events.
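
The text doesn't show AbstractAggregateRoot itself. A minimal sketch of what such a superclass might look like follows; its shape is assumed from the two methods mentioned above, and the `SampleTicket` subclass is purely hypothetical, included only to demonstrate the mechanism:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Assumed sketch of an event-recording aggregate root superclass.
public abstract class AbstractAggregateRoot {
    private final List<Object> domainEvents = new ArrayList<>();

    // Called by aggregate methods whenever a state change occurs
    protected void registerDomainEvent(Object event) {
        domainEvents.add(event);
    }

    // Called by the service to retrieve the accumulated events
    public List<Object> getDomainEvents() {
        return Collections.unmodifiableList(domainEvents);
    }
}

// Hypothetical aggregate used only to demonstrate the mechanism
class SampleTicket extends AbstractAggregateRoot {
    void accept() {
        registerDomainEvent("TicketAcceptedEvent");
    }
}
```

A service would call the aggregate method, then drain `getDomainEvents()` and hand the events to the publisher.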

我更喜欢第一个选项:将事件返回给服务的方法。但是在聚合中累积事件 root 也是一个可行的选项。实际上,Spring Data Ingalls release train (https://spring.io/blog/2017/01/30/what-s-new-in-spring-data-release-ingalls) 实现了一种自动将事件发布到 Spring 的机制。主要缺点是,为了减少代码重复,聚合根应该扩展一个超类,例如 ,这可能与扩展其他一些超类的要求相冲突。另一个问题是,尽管聚合根的方法很容易调用 ,但聚合中其他类中的方法会发现它具有挑战性。他们很可能需要以某种方式传递事件 添加到聚合根。ApplicationContextAbstractAggregateRootregisterDomainEvent()

My preference is for the first option: the method returning events to the service. But accumulating events in the aggregate root is also a viable option. In fact, the Spring Data Ingalls release train (https://spring.io/blog/2017/01/30/what-s-new-in-spring-data-release-ingalls) implements a mechanism that automatically publishes events to the Spring ApplicationContext. The main drawback is that to reduce code duplication, aggregate roots should extend a superclass such as AbstractAggregateRoot, which might conflict with a requirement to extend some other superclass. Another issue is that although it’s easy for the aggregate root’s methods to call registerDomainEvent(), methods in other classes in the aggregate would find it challenging. They would mostly likely need to somehow pass the events to the aggregate root.

如何可靠地发布域事件?

第 3 章讨论了如何将消息作为本地数据库事务的一部分可靠地发送。领域事件也不例外。服务必须使用事务性消息传递来发布事件,以确保它们作为更新数据库中聚合的事务的一部分发布。第 3 章中描述的 Eventuate Tram 框架实现了这种机制:它将事件作为更新数据库的 ACID 事务的一部分插入到 OUTBOX 表中。事务提交后,插入到 OUTBOX 表中的事件随后被发布到消息代理。

Chapter 3 talks about how to reliably send messages as part of a local database transaction. Domain events are no different. A service must use transactional messaging to publish events to ensure that they're published as part of the transaction that updates the aggregate in the database. The Eventuate Tram framework, described in chapter 3, implements such a mechanism. It inserts events into an OUTBOX table as part of the ACID transaction that updates the database. After the transaction commits, the events that were inserted into the OUTBOX table are then published to the message broker.
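
To make the outbox guarantee concrete, here is a toy in-memory simulation of the commit-or-nothing behavior. This is not Eventuate Tram's implementation (which uses a real database table and a message relay); the class and its names are invented solely to illustrate that the aggregate update and the event record become visible together, or not at all:

```java
import java.util.ArrayList;
import java.util.List;

// Toy simulation of the transactional outbox guarantee.
public class OutboxDemo {
    final List<String> ticketTable = new ArrayList<>(); // stand-in for the aggregate's table
    final List<String> outboxTable = new ArrayList<>(); // stand-in for the OUTBOX table

    // Simulates one transaction; failBeforeCommit models a crash or error.
    public boolean acceptTicket(long ticketId, boolean failBeforeCommit) {
        List<String> stagedTicket = new ArrayList<>(ticketTable);
        List<String> stagedOutbox = new ArrayList<>(outboxTable);
        stagedTicket.add("Ticket " + ticketId + " ACCEPTED");
        stagedOutbox.add("TicketAcceptedEvent for " + ticketId);
        if (failBeforeCommit) {
            return false; // rollback: staged changes are discarded
        }
        ticketTable.clear();
        ticketTable.addAll(stagedTicket);
        outboxTable.clear();
        outboxTable.addAll(stagedOutbox);
        return true; // commit: both writes are now visible together
    }

    public static void main(String[] args) {
        OutboxDemo demo = new OutboxDemo();
        demo.acceptTicket(1L, true);   // rolled back: no update, no event
        demo.acceptTicket(1L, false);  // committed: both recorded
        System.out.println(demo.outboxTable.size());
    }
}
```

In the real pattern, a separate message relay then reads committed OUTBOX rows and publishes them to the broker.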

Eventuate Tram 框架提供了一个 DomainEventPublisher 接口,如下面的清单所示。它定义了几个重载的 publish() 方法,这些方法以聚合类型和 ID 以及领域事件列表作为参数。

The Tram framework provides a DomainEventPublisher interface, shown in the following listing. It defines several overloaded publish() methods that take the aggregate type and ID as parameters, along with a list of domain events.

清单 5.6. Eventuate Tram 框架的 DomainEventPublisher 接口
public interface DomainEventPublisher {
 void publish(String aggregateType, Object aggregateId,
     List<DomainEvent> domainEvents);
}

它使用 Eventuate Tram 框架的 MessageProducer 接口以事务方式发布这些事件。

It uses the Eventuate Tram framework’s MessageProducer interface to publish those events transactionally.

服务可以直接调用 DomainEventPublisher。但这样做的一个缺点是,它不能确保服务只发布有效的事件。例如,KitchenService 应该只发布实现 TicketDomainEvent(Ticket 聚合事件的标记接口)的事件。更好的选择是让服务实现 AbstractAggregateDomainEventPublisher 的子类,如清单 5.7 所示。AbstractAggregateDomainEventPublisher 是一个抽象类,为发布领域事件提供类型安全的接口。它是一个泛型类,具有两个类型参数:A,聚合类型;E,领域事件的标记接口类型。服务通过调用 publish() 方法发布事件,该方法有两个参数:类型为 A 的聚合和类型为 E 的事件列表。

A service could call the DomainEventPublisher directly. But one drawback of doing so is that it doesn't ensure that a service only publishes valid events. KitchenService, for example, should only publish events that implement TicketDomainEvent, which is the marker interface for the Ticket aggregate's events. A better option is for services to implement a subclass of AbstractAggregateDomainEventPublisher, which is shown in listing 5.7. AbstractAggregateDomainEventPublisher is an abstract class that provides a type-safe interface for publishing domain events. It's a generic class that has two type parameters: A, the aggregate type, and E, the marker interface type for the domain events. A service publishes events by calling the publish() method, which has two parameters: an aggregate of type A and a list of events of type E.

清单 5.7.类型安全域事件发布者的抽象超类
public abstract class AbstractAggregateDomainEventPublisher<A, E extends DomainEvent> {
  private Function<A, Object> idSupplier;
  private DomainEventPublisher eventPublisher;
  private Class<A> aggregateType;

  protected AbstractAggregateDomainEventPublisher(
     DomainEventPublisher eventPublisher,
     Class<A> aggregateType,
     Function<A, Object> idSupplier) {
    this.eventPublisher = eventPublisher;
    this.aggregateType = aggregateType;
    this.idSupplier = idSupplier;
  }

  public void publish(A aggregate, List<E> events) {
    eventPublisher.publish(aggregateType, idSupplier.apply(aggregate),
     (List<DomainEvent>) events);
  }

}

publish() 方法检索聚合的 ID 并调用 DomainEventPublisher.publish()。以下清单显示了 TicketDomainEventPublisher,它发布 Ticket 聚合的领域事件。

The publish() method retrieves the aggregate’s ID and invokes DomainEventPublisher.publish(). The following listing shows the TicketDomainEventPublisher, which publishes domain events for the Ticket aggregate.

清单 5.8. 用于发布 Ticket 聚合领域事件的类型安全接口
public class TicketDomainEventPublisher extends
     AbstractAggregateDomainEventPublisher<Ticket, TicketDomainEvent> {

  public TicketDomainEventPublisher(DomainEventPublisher eventPublisher) {
    super(eventPublisher, Ticket.class, Ticket::getId);
  }

}

此类仅发布 TicketDomainEvent 的子类事件。

This class only publishes events that are a subclass of TicketDomainEvent.

现在我们已经了解了如何发布域事件,让我们看看如何使用它们。

Now that we’ve looked at how to publish domain events, let’s see how to consume them.

5.3.6. 使用域事件

5.3.6. Consuming domain events

领域事件最终作为消息发布到消息代理,例如 Apache Kafka。使用者可以直接使用代理的客户端 API。但使用更高级别的 API 更方便,例如第 3 章中描述的 Eventuate Tram 框架的 DomainEventDispatcher。DomainEventDispatcher 将领域事件调度到相应的处理方法。清单 5.9 显示了一个示例事件处理程序类。KitchenServiceEventConsumer 订阅 Restaurant Service 在餐厅菜单更新时发布的事件,它负责使 Kitchen Service 的数据副本保持最新。

Domain events are ultimately published as messages to a message broker, such as Apache Kafka. A consumer could use the broker’s client API directly. But it’s more convenient to use a higher-level API such as the Eventuate Tram framework’s DomainEventDispatcher, described in chapter 3. A DomainEventDispatcher dispatches domain events to the appropriate handle method. Listing 5.9 shows an example event handler class. KitchenServiceEventConsumer subscribes to events published by Restaurant Service whenever a restaurant’s menu is updated. It’s responsible for keeping Kitchen Service’s replica of the data up-to-date.

清单 5.9.将事件调度到事件处理程序方法
public class KitchenServiceEventConsumer {
  @Autowired
  private RestaurantService restaurantService;

  public DomainEventHandlers domainEventHandlers() {                         1
     return DomainEventHandlersBuilder
      .forAggregateType("net.chrisrichardson.ftgo.restaurantservice.Restaurant")
      .onEvent(RestaurantMenuRevised.class, this::reviseMenu)
      .build();
  }

  public void reviseMenu(DomainEventEnvelope<RestaurantMenuRevised> de) {    2
    long id = Long.parseLong(de.getAggregateId());
    RestaurantMenu revisedMenu = de.getEvent().getRevisedMenu();
    restaurantService.reviseMenu(id, revisedMenu);
  }

}

  • 1 将事件映射到事件处理程序
  • 1 Maps events to event handlers
  • 2 RestaurantMenuRevised 事件的事件处理程序
  • 2 An event handler for the RestaurantMenuRevised event

reviseMenu() 方法处理 RestaurantMenuRevised 事件。它调用 restaurantService.reviseMenu(),后者更新餐厅的菜单。该方法返回领域事件列表,这些事件由事件处理程序发布。

The reviseMenu() method handles RestaurantMenuRevised events. It calls restaurantService.reviseMenu(), which updates the restaurant’s menu. That method returns a list of domain events, which are published by the event handler.

现在我们已经了解了聚合和领域事件,是时候看一些使用聚合实现的示例业务逻辑了。

Now that we’ve looked at aggregates and domain events, it’s time to consider some example business logic that’s implemented using aggregates.

5.4. Kitchen Service 业务逻辑

5.4. Kitchen Service business logic

第一个示例是 Kitchen Service,它使餐厅能够管理其订单。此服务中的两个主要聚合是 Restaurant 和 Ticket。Restaurant 聚合知道餐厅的菜单和营业时间,并且可以验证订单。Ticket 表示餐厅必须准备好供快递员取货的订单。图 5.11 显示了这些聚合和服务业务逻辑的其他关键部分,以及服务的适配器。

The first example is Kitchen Service, which enables a restaurant to manage their orders. The two main aggregates in this service are the Restaurant and Ticket aggregates. The Restaurant aggregate knows the restaurant’s menu and opening hours and can validate orders. A Ticket represents an order that a restaurant must prepare for pickup by a courier. Figure 5.11 shows these aggregates and other key parts of the service’s business logic, as well as the service’s adapters.

Figure 5.11. The design of Kitchen Service

In addition to the aggregates, the other main parts of Kitchen Service’s business logic are KitchenService, TicketRepository, and RestaurantRepository. KitchenService is the business logic’s entry. It defines methods for creating and updating the Restaurant and Ticket aggregates. TicketRepository and RestaurantRepository define methods for persisting Tickets and Restaurants respectively.

The Kitchen Service service has three inbound adapters:

  • REST API: The REST API invoked by the user interface used by workers at the restaurant. It invokes KitchenService to create and update Tickets.
  • KitchenServiceCommandHandler: The asynchronous request/response-based API that’s invoked by sagas. It invokes KitchenService to create and update Tickets.
  • KitchenServiceEventConsumer: Subscribes to events published by Restaurant Service. It invokes KitchenService to create and update Restaurants.

The service also has two outbound adapters:

  • DB adapter: Implements the TicketRepository and RestaurantRepository interfaces and accesses the database.
  • DomainEventPublishingAdapter: Implements the DomainEventPublisher interface and publishes Ticket domain events.

Let’s take a closer look at the design of KitchenService, starting with the Ticket aggregate.

5.4.1. The Ticket aggregate

Ticket is one of the aggregates of Kitchen Service. As described in chapter 2, when talking about the concept of a Bounded Context, this aggregate represents the restaurant kitchen’s view of an order. It doesn’t contain information about the consumer, such as their identity, the delivery information, or payment details. It’s focused on enabling a restaurant’s kitchen to prepare the Order for pickup. Moreover, KitchenService doesn’t generate a unique ID for this aggregate. Instead, it uses the ID supplied by OrderService.

Let’s first look at the structure of this class and then we’ll examine its methods.

The structure of the Ticket class

The following listing shows an excerpt of the code for this class. The Ticket class is similar to a traditional domain class. The main difference is that references to other aggregates are by primary key.

Listing 5.10. Part of the Ticket class, which is a JPA entity
@Entity
@Table(name="tickets")
public class Ticket {

  @Id
  private Long id;
  private TicketState state;
  private Long restaurantId;

  @ElementCollection
  @CollectionTable(name="ticket_line_items")
  private List<TicketLineItem> lineItems;

  private ZonedDateTime readyBy;
  private ZonedDateTime acceptTime;
  private ZonedDateTime preparingTime;
  private ZonedDateTime pickedUpTime;
  private ZonedDateTime readyForPickupTime;
  ...

This class is persisted with JPA and is mapped to the TICKETS table. The restaurantId field is a Long rather than an object reference to a Restaurant. The readyBy field stores the estimate of when the order will be ready for pickup. The Ticket class has several fields that track the history of the order, including acceptTime, preparingTime, and pickedUpTime. Let’s look at this class’s methods.

The behavior of the Ticket aggregate

The Ticket aggregate defines several methods. As you saw earlier, it has a static create() method, which is a factory method that creates a Ticket. There are also some methods that are invoked when the restaurant updates the state of the order:

  • accept(): The restaurant has accepted the order.
  • preparing(): The restaurant has started preparing the order, which means the order can no longer be changed or cancelled.
  • readyForPickup(): The order can now be picked up.

The following listing shows some of its methods.

Listing 5.11. Some of the Ticket aggregate’s methods
public class Ticket {

public static ResultWithAggregateEvents<Ticket, TicketDomainEvent>
     create(Long id, TicketDetails details) {
  return new ResultWithAggregateEvents<>(new Ticket(id, details), new
     TicketCreatedEvent(id, details));
}

public List<TicketPreparationStartedEvent> preparing() {
  switch (state) {
    case ACCEPTED:
      this.state = TicketState.PREPARING;
      this.preparingTime = ZonedDateTime.now();
      return singletonList(new TicketPreparationStartedEvent());
    default:
      throw new UnsupportedStateTransitionException(state);
  }
}

public List<TicketDomainEvent> cancel() {
    switch (state) {
      case CREATED:
      case ACCEPTED:
        this.state = TicketState.CANCELLED;
        return singletonList(new TicketCancelled());
      case READY_FOR_PICKUP:
        throw new TicketCannotBeCancelledException();

      default:
        throw new UnsupportedStateTransitionException(state);

    }
  }

The create() method creates a Ticket. The preparing() method is called when the restaurant starts preparing the order. It changes the state of the order to PREPARING, records the time, and publishes an event. The cancel() method is called when a user attempts to cancel an order. If the cancellation is allowed, this method changes the state of the order and returns an event. Otherwise, it throws an exception. These methods are invoked in response to REST API requests as well as events and command messages. Let’s look at the classes that invoke the aggregate’s methods.

The KitchenService domain service

KitchenService is invoked by the service’s inbound adapters. It defines various methods for changing the state of an order, including accept(), reject(), preparing(), and others. Each method loads the specified aggregate, calls the corresponding method on the aggregate root, and publishes any domain events. The following listing shows its accept() method.

Listing 5.12. The service’s accept() method updates the Ticket
public class KitchenService {

  @Autowired
  private TicketRepository ticketRepository;

  @Autowired
  private TicketDomainEventPublisher domainEventPublisher;

  public void accept(long ticketId, ZonedDateTime readyBy) {
    Ticket ticket =
          ticketRepository.findById(ticketId)
            .orElseThrow(() ->
                      new TicketNotFoundException(ticketId));
    List<TicketDomainEvent> events = ticket.accept(readyBy);
    domainEventPublisher.publish(ticket, events);                1
  }

}

  • 1 Publish domain events

The accept() method is invoked when the restaurant accepts a new order. It has two parameters:

  • orderId: The ID of the order to accept.
  • readyBy: The estimated time when the order will be ready for pickup.

This method retrieves the Ticket aggregate and calls its accept() method. It publishes any generated events.
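The Ticket aggregate’s accept() method itself isn’t shown in the listings. Judging from preparing() and cancel() in listing 5.11, it plausibly follows the same switch-on-state pattern: validate the current state, record the transition time and the readyBy estimate, and return the events to publish. The following is a self-contained sketch of that pattern; the event class, the exception type, and the CREATED-to-ACCEPTED rule are simplifying assumptions, not the book’s actual code:

```java
import java.time.ZonedDateTime;
import java.util.Collections;
import java.util.List;

enum TicketState { CREATED, ACCEPTED, PREPARING, READY_FOR_PICKUP, CANCELLED }

// Simplified stand-in for the book's TicketDomainEvent hierarchy.
class TicketAcceptedEvent {}

class Ticket {
  private TicketState state = TicketState.CREATED;
  private ZonedDateTime acceptTime;
  private ZonedDateTime readyBy;

  // Hypothetical accept(): legal only from the CREATED state.
  public List<TicketAcceptedEvent> accept(ZonedDateTime readyBy) {
    switch (state) {
      case CREATED:
        this.state = TicketState.ACCEPTED;
        this.acceptTime = ZonedDateTime.now();  // track the order's history
        this.readyBy = readyBy;                 // estimated pickup time
        return Collections.singletonList(new TicketAcceptedEvent());
      default:
        throw new IllegalStateException("cannot accept in state " + state);
    }
  }

  public TicketState getState() { return state; }
}

public class TicketAcceptSketch {
  public static void main(String[] args) {
    Ticket ticket = new Ticket();
    List<TicketAcceptedEvent> events =
        ticket.accept(ZonedDateTime.now().plusMinutes(30));
    // prints "ACCEPTED, events to publish: 1"
    System.out.println(ticket.getState() + ", events to publish: " + events.size());
  }
}
```

As in the real aggregate, the method mutates state and returns the domain events rather than publishing them itself; publishing is left to KitchenService.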

Now let’s look at the class that handles asynchronous commands.

The KitchenServiceCommandHandler class

The KitchenServiceCommandHandler class is an adapter that’s responsible for handling command messages sent by the various sagas implemented by Order Service. This class defines a handler method for each command, which invokes KitchenService to create or update a Ticket. The following listing shows an excerpt of this class.

Listing 5.13. Handling command messages sent by a saga
public class KitchenServiceCommandHandler {

  @Autowired
  private KitchenService kitchenService;

  public CommandHandlers commandHandlers() {                        1
   return CommandHandlersBuilder
          .fromChannel("orderService")
          .onMessage(CreateTicket.class, this::createTicket)
          .onMessage(ConfirmCreateTicket.class,
                  this::confirmCreateTicket)
          .onMessage(CancelCreateTicket.class,
                  this::cancelCreateTicket)
          .build();
 }

 private Message createTicket(CommandMessage<CreateTicket>
                                               cm) {
  CreateTicket command = cm.getCommand();
  long restaurantId = command.getRestaurantId();
  Long ticketId = command.getOrderId();
  TicketDetails ticketDetails =
      command.getTicketDetails();

  try {
    Ticket ticket =                                                 2
       kitchenService.createTicket(restaurantId,
                                   ticketId, ticketDetails);
    CreateTicketReply reply =
                new CreateTicketReply(ticket.getId());
    return withSuccess(reply);                                      3
   } catch (RestaurantDetailsVerificationException e) {
    return withFailure();                                           4
   }
 }

 private Message confirmCreateTicket
         (CommandMessage<ConfirmCreateTicket> cm) {                 5
      Long ticketId = cm.getCommand().getTicketId();
     kitchenService.confirmCreateTicket(ticketId);
     return withSuccess();
 }

   ...

  • 1 Maps command messages to message handlers
  • 2 Invokes KitchenService to create the Ticket
  • 3 Sends back a successful reply
  • 4 Sends back a failure reply
  • 5 Confirms the order

All the command handler methods invoke KitchenService and reply with either a success or a failure reply.
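This try/reply pattern can be shown in isolation. In the sketch below, the Reply type and the failing-restaurant rule are invented stand-ins, not the Eventuate framework types used in listing 5.13; the point is that an expected domain failure becomes a failure reply instead of escaping as an exception, so the saga can run its compensating transactions:

```java
// Invented stand-in for the framework's reply message.
class Reply {
  final boolean success;
  final Long ticketId;
  Reply(boolean success, Long ticketId) { this.success = success; this.ticketId = ticketId; }
}

class RestaurantDetailsVerificationException extends RuntimeException {}

// Minimal fake service: verification fails for any restaurant other than #1.
class FakeKitchenService {
  Long createTicket(long restaurantId, long ticketId) {
    if (restaurantId != 1L) throw new RestaurantDetailsVerificationException();
    return ticketId;
  }
}

public class CommandHandlerSketch {
  // Mirrors the shape of createTicket() in listing 5.13: invoke the service,
  // then map the outcome to a success or failure reply.
  static Reply handleCreateTicket(FakeKitchenService svc, long restaurantId, long ticketId) {
    try {
      return new Reply(true, svc.createTicket(restaurantId, ticketId));
    } catch (RestaurantDetailsVerificationException e) {
      return new Reply(false, null);  // the saga treats this as a failed step
    }
  }

  public static void main(String[] args) {
    FakeKitchenService svc = new FakeKitchenService();
    System.out.println(handleCreateTicket(svc, 1L, 42L).success);   // true
    System.out.println(handleCreateTicket(svc, 99L, 42L).success);  // false
  }
}
```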

Now that you’ve seen the business logic for a relatively simple service, we’ll look at a more complex example: Order Service.

5.5. Order Service business logic

As mentioned in earlier chapters, Order Service provides an API for creating, updating, and canceling orders. This API is primarily invoked by the consumer. Figure 5.12 shows the high-level design of the service. The Order aggregate is the central aggregate of Order Service. But there’s also a Restaurant aggregate, which is a partial replica of data owned by Restaurant Service. It enables Order Service to validate and price an Order’s line items.

Figure 5.12. The design of Order Service. It has a REST API for managing orders, and it exchanges messages and events with other services over several message channels.

In addition to the Order and Restaurant aggregates, the business logic consists of OrderService, OrderRepository, RestaurantRepository, and various sagas such as the CreateOrderSaga described in chapter 4. OrderService is the primary entry point into the business logic and defines methods for creating and updating Orders and Restaurants. OrderRepository defines methods for persisting Orders, and RestaurantRepository has methods for persisting Restaurants. Order Service has several inbound adapters:

  • REST API: The REST API invoked by the user interface used by consumers. It invokes OrderService to create and update Orders.
  • OrderEventConsumer: Subscribes to events published by Restaurant Service. It invokes OrderService to create and update its replica of Restaurants.
  • OrderCommandHandlers: The asynchronous request/response-based API that’s invoked by sagas. It invokes OrderService to update Orders.
  • SagaReplyAdapter: Subscribes to the saga reply channels and invokes the sagas.

The service also has some outbound adapters:

  • DB adapter: Implements the OrderRepository interface and accesses the Order Service database.
  • DomainEventPublishingAdapter: Implements the DomainEventPublisher interface and publishes Order domain events.
  • OutboundCommandMessageAdapter: Implements the CommandPublisher interface and sends command messages to saga participants.

Let’s first take a closer look at the Order aggregate and then examine OrderService.

5.5.1. The Order aggregate

The Order aggregate represents an order placed by a consumer. We’ll first look at the structure of the Order aggregate and then check out its methods.

The structure of the Order aggregate

Figure 5.13 shows the structure of the Order aggregate. The Order class is the root of the Order aggregate. The Order aggregate also consists of value objects such as OrderLineItem, DeliveryInfo, and PaymentInfo.

Figure 5.13. The design of the Order aggregate, which consists of the Order aggregate root and various value objects.

The Order class has a collection of OrderLineItems. Because the Order’s Consumer and Restaurant are other aggregates, it references them by primary key value. The Order class has a DeliveryInfo class, which stores the delivery address and the desired delivery time, and a PaymentInfo, which stores the payment info. The following listing shows the code.

Listing 5.14. The Order class and its fields
@Entity
@Table(name="orders")
@Access(AccessType.FIELD)
public class Order {

  @Id
  @GeneratedValue
  private Long id;

  @Version
  private Long version;

  private OrderState state;
  private Long consumerId;
  private Long restaurantId;

  @Embedded
  private OrderLineItems orderLineItems;

  @Embedded
  private DeliveryInformation deliveryInformation;

  @Embedded
  private PaymentInformation paymentInformation;

  @Embedded
  private Money orderMinimum;

This class is persisted with JPA and is mapped to the ORDERS table. The id field is the primary key. The version field is used for optimistic locking. The state of an Order is represented by the OrderState enumeration. The DeliveryInformation and PaymentInformation fields are mapped using the @Embedded annotation and are stored as columns of the ORDERS table. The orderLineItems field is an embedded object that contains the order line items. The Order aggregate consists of more than just fields. It also implements business logic, which can be described by a state machine. Let’s take a look at the state machine.
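The @Version field drives JPA’s optimistic locking: every successful update increments the version, and an update based on a stale version fails instead of silently overwriting a concurrent change. The mechanism can be sketched without JPA; this is an illustration of the idea, not Hibernate’s actual implementation:

```java
class StaleOrderException extends RuntimeException {}

// Hand-rolled version check, mimicking what JPA does for an @Version field.
class VersionedOrder {
  private long version = 0;
  private String state = "APPROVAL_PENDING";

  void update(long expectedVersion, String newState) {
    if (expectedVersion != version) {
      throw new StaleOrderException();  // someone else updated first
    }
    state = newState;
    version++;  // each successful update bumps the version
  }

  long getVersion() { return version; }
  String getState() { return state; }
}

public class OptimisticLockSketch {
  public static void main(String[] args) {
    VersionedOrder order = new VersionedOrder();
    order.update(0, "APPROVED");       // succeeds; version is now 1
    try {
      order.update(0, "CANCELLED");    // stale: still based on version 0
    } catch (StaleOrderException e) {
      // prints "stale update rejected; state=APPROVED"
      System.out.println("stale update rejected; state=" + order.getState());
    }
  }
}
```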

The Order aggregate state machine

In order to create or update an order, Order Service must collaborate with other services using sagas. Either OrderService or the first step of the saga invokes an Order method that verifies that the operation can be performed and changes the state of the Order to a pending state. A pending state, as explained in chapter 4, is an example of a semantic lock countermeasure, which helps ensure that sagas are isolated from one another. Eventually, once the saga has invoked the participating services, it then updates the Order to reflect the outcome. For example, as described in chapter 4, the Create Order Saga has multiple participant services, including Consumer Service, Accounting Service, and Kitchen Service. OrderService first creates an Order in an APPROVAL_PENDING state, and then later changes its state to either APPROVED or REJECTED. The behavior of an Order can be modeled as the state machine shown in figure 5.14.

Figure 5.14. Part of the state machine model of the Order aggregate

Similarly, other Order Service operations such as revise() and cancel() first change the Order to a pending state and use a saga to verify that the operation can be performed. Once the saga has verified this, it transitions the Order to another state that reflects the successful outcome of the operation. If the verification fails, the Order reverts to its previous state. For example, the cancel() operation first transitions the Order to the CANCEL_PENDING state. If the order can be cancelled, the Cancel Order Saga changes the state of the Order to CANCELLED. Otherwise, if the cancel() operation is rejected because, for example, it’s too late to cancel the order, the Order transitions back to the APPROVED state.
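The cancel() flow just described can be sketched end to end. The noteCancelled() and undoCancel() method names below are assumptions for illustration; what matters is the semantic-lock shape: claim the pending state, then either complete the transition or revert it.

```java
enum OrderState { APPROVAL_PENDING, APPROVED, REJECTED, CANCEL_PENDING, CANCELLED, REVISION_PENDING }

class CancellableOrder {
  private OrderState state = OrderState.APPROVED;

  // First saga step: claim the semantic lock.
  public void cancel() {
    if (state != OrderState.APPROVED) throw new IllegalStateException(state.name());
    state = OrderState.CANCEL_PENDING;
  }

  // Saga verified the cancellation: complete the transition.
  public void noteCancelled() {
    if (state != OrderState.CANCEL_PENDING) throw new IllegalStateException(state.name());
    state = OrderState.CANCELLED;
  }

  // Saga rejected it (e.g. too late to cancel): release the lock.
  public void undoCancel() {
    if (state != OrderState.CANCEL_PENDING) throw new IllegalStateException(state.name());
    state = OrderState.APPROVED;
  }

  public OrderState getState() { return state; }
}

public class SemanticLockSketch {
  public static void main(String[] args) {
    CancellableOrder order = new CancellableOrder();
    order.cancel();
    order.undoCancel();       // rejection path: back to APPROVED
    order.cancel();
    order.noteCancelled();    // success path
    System.out.println(order.getState());  // prints "CANCELLED"
  }
}
```

While the Order sits in CANCEL_PENDING, other pending-state transitions (such as revise()) are rejected, which is how sagas stay isolated from one another without database-level locks.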

Let’s now look at how the Order aggregate implements this state machine.

The Order aggregate’s methods

The Order class has several groups of methods, each of which corresponds to a saga. In each group, one method is invoked at the start of the saga, and the other methods are invoked at the end. I’ll first discuss the business logic that creates an Order. After that we’ll look at how an Order is updated. The following listing shows the Order’s methods that are invoked during the process of creating an Order.

Listing 5.15. The methods invoked during the process of creating an Order
public class Order { ...

  public static ResultWithDomainEvents<Order, OrderDomainEvent>
   createOrder(long consumerId, Restaurant restaurant,
                                        List<OrderLineItem> orderLineItems) {
    Order order = new Order(consumerId, restaurant.getId(), orderLineItems);
    List<OrderDomainEvent> events = singletonList(new OrderCreatedEvent(
            new OrderDetails(consumerId, restaurant.getId(), orderLineItems,
                    order.getOrderTotal()),
            restaurant.getName()));
    return new ResultWithDomainEvents<>(order, events);
  }

  public Order(OrderDetails orderDetails) {
    this.orderLineItems = new OrderLineItems(orderDetails.getLineItems());
    this.orderMinimum = orderDetails.getOrderMinimum();
    this.state = APPROVAL_PENDING;
  }
  ...

  public List<DomainEvent> noteApproved() {
    switch (state) {
      case APPROVAL_PENDING:
        this.state = APPROVED;
        return singletonList(new OrderAuthorized());
      ...
      default:
        throw new UnsupportedStateTransitionException(state);
    }
  }

  public List<DomainEvent> noteRejected() {
    switch (state) {
      case APPROVAL_PENDING:
        this.state = REJECTED;
        return singletonList(new OrderRejected());
        ...
      default:
        throw new UnsupportedStateTransitionException(state);
    }

  }

The createOrder() method is a static factory method that creates an Order and publishes an OrderCreatedEvent. The OrderCreatedEvent is enriched with the details of the Order, including the line items, the total amount, the restaurant ID, and the restaurant name. Chapter 7 discusses how Order History Service uses Order events, including OrderCreatedEvent, to maintain an easily queried replica of Orders.

The initial state of the Order is APPROVAL_PENDING. When the CreateOrderSaga completes, it will invoke either noteApproved() or noteRejected(). The noteApproved() method is invoked when the consumer’s credit card has been successfully authorized. The noteRejected() method is called when one of the services rejects the order or authorization fails. As you can see, the state of the Order aggregate determines the behavior of most of its methods. Like the Ticket aggregate, it also emits events.

In addition to createOrder(), the Order class defines several update methods. For example, the Revise Order Saga revises an order by first invoking the revise() method and then, once it’s verified that the revision can be made, it invokes the confirmRevision() method. The following listing shows these methods.

Listing 5.16. The Order methods for revising an Order
class Order ...

  public List<OrderDomainEvent> revise(OrderRevision orderRevision) {
    switch (state) {

      case APPROVED:
        LineItemQuantityChange change =
                orderLineItems.lineItemQuantityChange(orderRevision);
        if (!change.newOrderTotal.isGreaterThanOrEqual(orderMinimum)) {
          throw new OrderMinimumNotMetException();
        }
        this.state = REVISION_PENDING;
        return singletonList(new OrderRevisionProposed(orderRevision,
                          change.currentOrderTotal, change.newOrderTotal));

      default:
        throw new UnsupportedStateTransitionException(state);
    }
  }

  public List<OrderDomainEvent> confirmRevision(OrderRevision orderRevision) {
    switch (state) {
      case REVISION_PENDING:
        LineItemQuantityChange licd =
          orderLineItems.lineItemQuantityChange(orderRevision);

        orderRevision
              .getDeliveryInformation()
              .ifPresent(newDi -> this.deliveryInformation = newDi);

        if (!orderRevision.getRevisedLineItemQuantities().isEmpty()) {
          orderLineItems.updateLineItems(orderRevision);
        }

        this.state = APPROVED;
        return singletonList(new OrderRevised(orderRevision,
                          licd.currentOrderTotal, licd.newOrderTotal));
      default:
        throw new UnsupportedStateTransitionException(state);
    }
  }

}

The revise() method is called to initiate the revision of an order. Among other things, it verifies that the revised order won’t violate the order minimum and changes the state of the order to REVISION_PENDING. Once Revise Order Saga has successfully updated Kitchen Service and Accounting Service, it then calls confirmRevision() to complete the revision.
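The order-minimum check in revise() can be exercised in isolation. In the sketch below, Money is a simplified stand-in for the FTGO Money value object, keeping only the isGreaterThanOrEqual() operation that the listing actually uses:

```java
import java.math.BigDecimal;

// Simplified stand-in for the FTGO Money value object.
class Money {
  private final BigDecimal amount;
  Money(String amount) { this.amount = new BigDecimal(amount); }
  boolean isGreaterThanOrEqual(Money other) {
    return amount.compareTo(other.amount) >= 0;
  }
}

public class OrderMinimumSketch {
  // A revision is allowed only if the new total still meets the order minimum;
  // otherwise revise() throws OrderMinimumNotMetException.
  static boolean revisionAllowed(Money newOrderTotal, Money orderMinimum) {
    return newOrderTotal.isGreaterThanOrEqual(orderMinimum);
  }

  public static void main(String[] args) {
    System.out.println(revisionAllowed(new Money("20.00"), new Money("15.00")));  // true
    System.out.println(revisionAllowed(new Money("10.00"), new Money("15.00")));  // false
  }
}
```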

These methods are invoked by OrderService. Let’s take a look at that class.

5.5.2. The OrderService class

The OrderService class defines methods for creating and updating Orders. It’s the main entry point into the business logic and is invoked by various inbound adapters, such as the REST API. Most of its methods create a saga to orchestrate the creation and updating of Order aggregates. As a result, this service is more complicated than the KitchenService class discussed earlier. The following listing shows an excerpt of this class. OrderService is injected with various dependencies, including OrderRepository, OrderDomainEventPublisher, and several saga managers. It defines several methods, including createOrder() and reviseOrder().

Listing 5.17. The OrderService class has methods for creating and managing orders
@Transactional
public class OrderService {

  @Autowired
  private OrderRepository orderRepository;

  @Autowired
  private RestaurantRepository restaurantRepository;

  @Autowired
  private SagaManager<CreateOrderSagaState> createOrderSagaManager;

  @Autowired
  private SagaManager<ReviseOrderSagaData> reviseOrderSagaManager;

  @Autowired
  private OrderDomainEventPublisher orderAggregateEventPublisher;

  public Order createOrder(long consumerId, long restaurantId,
                           List<MenuItemIdAndQuantity> lineItems) {

    Restaurant restaurant = restaurantRepository.findById(restaurantId)
            .orElseThrow(() ->
                    new RestaurantNotFoundException(restaurantId));

    List<OrderLineItem> orderLineItems =                                  1
       makeOrderLineItems(lineItems, restaurant);

    ResultWithDomainEvents<Order, OrderDomainEvent> orderAndEvents =
            Order.createOrder(consumerId, restaurant, orderLineItems);

    Order order = orderAndEvents.result;

    orderRepository.save(order);                                          2

    orderAggregateEventPublisher.publish(order, orderAndEvents.events);   3

    OrderDetails orderDetails =
      new OrderDetails(consumerId, restaurantId, orderLineItems,
                        order.getOrderTotal());

    CreateOrderSagaState data = new CreateOrderSagaState(order.getId(),
            orderDetails);

    createOrderSagaManager.create(data, Order.class, order.getId());      4

    return order;
  }

  public Order reviseOrder(long orderId, OrderRevision orderRevision) {
    Order order = orderRepository.findById(orderId)                       5
             .orElseThrow(() -> new OrderNotFoundException(orderId));
    ReviseOrderSagaData sagaData =
      new ReviseOrderSagaData(order.getConsumerId(), orderId,
            null, orderRevision);
    reviseOrderSagaManager.create(sagaData);                              6
    return order;
  }
}

  • 1 Creates the Order aggregate
  • 2 Persists the Order in the database
  • 3 Publishes domain events
  • 4 Creates the Create Order Saga
  • 5 Retrieves the Order
  • 6 Creates the Revise Order Saga

The createOrder() method first creates and persists an Order aggregate. It then publishes the domain events emitted by the aggregate. Finally, it creates a CreateOrderSaga. The reviseOrder() method retrieves the Order and then creates a ReviseOrderSaga.

In many ways, the business logic for a microservices-based application is not that different from that of a monolithic application. It consists of classes such as services, JPA-backed entities, and repositories. There are some differences, though. A domain model is organized as a set of DDD aggregates that impose various design constraints. Unlike in a traditional object model, references between classes in different aggregates are in terms of primary key values rather than object references. Also, a transaction can only create or update a single aggregate. It’s also useful for aggregates to publish domain events when their state changes.
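The primary-key reference rule can be sketched with a toy pair of aggregates. All class names here are invented for illustration; the point is only that Order stores the Consumer aggregate's ID, never a direct object reference to it:

```java
// Minimal sketch of referencing another aggregate by primary key.
// The Consumer and Order classes here are illustrative, not the book's code.
public class AggregateReferenceSketch {

    // Consumer is a separate aggregate with its own identity.
    static class Consumer {
        final long id;
        Consumer(long id) { this.id = id; }
    }

    // Order holds only the Consumer aggregate's primary key. Navigating to
    // the Consumer would require a repository lookup, which keeps the two
    // aggregates independently loadable and updatable.
    static class Order {
        private final long consumerId;
        Order(long consumerId) { this.consumerId = consumerId; }
        long getConsumerId() { return consumerId; }
    }
}
```

Because the reference is just a value, the two aggregates can live in different services and even different databases.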

Another major difference is that services often use sagas to maintain data consistency across multiple services. For example, Kitchen Service merely participates in sagas, it doesn’t initiate them. In contrast, Order Service relies heavily on sagas when creating and updating orders. That’s because Orders must be transactionally consistent with data owned by other services. As a result, most OrderService methods create a saga rather than update an Order directly.

This chapter has covered how to implement business logic using a traditional approach to persistence. That has involved integrating messaging and event publishing with database transaction management. The event publishing code is intertwined with the business logic. The next chapter looks at event sourcing, an event-centric approach to writing business logic where event generation is integral to the business logic rather than being bolted on.

Summary

  • The procedural Transaction script pattern is often a good way to implement simple business logic. But when implementing complex business logic you should consider using the object-oriented Domain model pattern.
  • A good way to organize a service’s business logic is as a collection of DDD aggregates. DDD aggregates are useful because they modularize the domain model, eliminate the possibility of object reference between services, and ensure that each ACID transaction is within a service.
  • An aggregate should publish domain events when it’s created or updated. Domain events have a wide variety of uses. Chapter 4 discusses how they can implement choreography-based sagas. And, in chapter 7, I talk about how to use domain events to update replicated data. Domain event subscribers can also notify users and other applications, and publish WebSocket messages to a user’s browser.

Chapter 6. Developing business logic with event sourcing

This chapter covers

  • Using the Event sourcing pattern to develop business logic
  • Implementing an event store
  • Integrating sagas and event sourcing-based business logic
  • Implementing saga orchestrators using event sourcing

Mary liked the idea, described in chapter 5, of structuring business logic as a collection of DDD aggregates that publish domain events. She could imagine those events being extremely useful in a microservice architecture. Mary planned to use events to implement choreography-based sagas, which maintain data consistency across services and are described in chapter 4. She also expected to use CQRS views, replicas that support efficient querying, described in chapter 7.

She was, however, worried that the event publishing logic might be error prone. On one hand, the event publishing logic is reasonably straightforward. Each of an aggregate’s methods that initializes or changes the state of the aggregate returns a list of events. The domain service then publishes those events. But on the other hand, the event publishing logic is bolted on to the business logic. The business logic continues to work even when the developer forgets to publish an event. Mary was concerned that this way of publishing events might be a source of bugs.

Many years ago, Mary had learned about event sourcing, an event-centric way of writing business logic and persisting domain objects. At the time she was intrigued by its numerous benefits, including how it preserves the complete history of the changes to an aggregate, but it remained a curiosity. Given the importance of domain events in microservice architecture, she now wondered whether it would be worthwhile to explore using event sourcing in the FTGO application. After all, event sourcing eliminates a source of programming errors by guaranteeing that an event will be published whenever an aggregate is created or updated.

I begin this chapter by describing how event sourcing works and how you can use it to write business logic. I describe how event sourcing persists each aggregate as a sequence of events in what is known as an event store. I discuss the benefits and drawbacks of event sourcing and cover how to implement an event store. I describe a simple framework for writing event sourcing-based business logic. After that, I discuss how event sourcing is a good foundation for implementing sagas. Let’s start by looking at how to develop business logic with event sourcing.

6.1. Developing business logic using event sourcing

Event sourcing is a different way of structuring the business logic and persisting aggregates. It persists an aggregate as a sequence of events. Each event represents a state change of the aggregate. An application recreates the current state of an aggregate by replaying the events.

Pattern: Event sourcing

Persist an aggregate as a sequence of domain events that represent state changes. See http://microservices.io/patterns/data/event-sourcing.html.

Event sourcing has several important benefits. For example, it preserves the history of aggregates, which is valuable for auditing and regulatory purposes. And it reliably publishes domain events, which is particularly useful in a microservice architecture. Event sourcing also has drawbacks. It involves a learning curve, because it’s a different way to write your business logic. Also, querying the event store is often difficult, which requires you to use the CQRS pattern, described in chapter 7.

I begin this section by describing the limitations of traditional persistence. I then describe event sourcing in detail and talk about how it overcomes those limitations. After that, I show how to implement the Order aggregate using event sourcing. Finally, I describe the benefits and drawbacks of event sourcing.

Let’s first look at the limitations of the traditional approach to persistence.

6.1.1. The trouble with traditional persistence

The traditional approach to persistence maps classes to database tables, fields of those classes to table columns, and instances of those classes to rows in those tables. For example, figure 6.1 shows how the Order aggregate, described in chapter 5, is mapped to the ORDER table. Its OrderLineItems are mapped to the ORDER_LINE_ITEM table.

Figure 6.1. The traditional approach to persistence maps classes to tables and objects to rows in those tables.

The application persists an order instance as rows in the ORDER and ORDER_LINE_ITEM tables. It might do that using an ORM framework such as JPA or a lower-level framework such as MyBATIS.

This approach clearly works well because most enterprise applications store data this way. But it has several drawbacks and limitations:

  • Object-Relational impedance mismatch.
  • Lack of aggregate history.
  • Implementing audit logging is tedious and error prone.
  • Event publishing is bolted on to the business logic.

Let’s look at each of these problems, starting with the Object-Relational impedance mismatch problem.

Object-Relational impedance mismatch

One age-old problem is the so-called Object-Relational impedance mismatch problem. There’s a fundamental conceptual mismatch between the tabular relational schema and the graph structure of a rich domain model with its complex relationships. Some aspects of this problem are reflected in polarized debates over the suitability of Object/Relational mapping (ORM) frameworks. For example, Ted Neward has said that “Object-Relational mapping is the Vietnam of Computer Science” (http://blogs.tedneward.com/post/the-vietnam-of-computer-science/). To be fair, I’ve used Hibernate successfully to develop applications where the database schema has been derived from the object model. But the problems are deeper than the limitations of any particular ORM framework.

Lack of aggregate history

Another limitation of traditional persistence is that it only stores the current state of an aggregate. Once an aggregate has been updated, its previous state is lost. If an application must preserve the history of an aggregate, perhaps for regulatory purposes, then developers must implement this mechanism themselves. It is time consuming to implement an aggregate history mechanism and involves duplicating code that must be synchronized with the business logic.

Implementing audit logging is tedious and error prone

Another issue is audit logging. Many applications must maintain an audit log that tracks which users have changed an aggregate. Some applications require auditing for security or regulatory purposes. In other applications, the history of user actions is an important feature. For example, issue trackers and task-management applications such as Asana and JIRA display the history of changes to tasks and issues. The challenge of implementing auditing is that besides being a time-consuming chore, the auditing logging code and the business logic can diverge, resulting in bugs.

Event publishing is bolted on to the business logic

Another limitation of traditional persistence is that it usually doesn’t support publishing domain events. Domain events, discussed in chapter 5, are events that are published by an aggregate when its state changes. They’re a useful mechanism for synchronizing data and sending notifications in microservice architecture. Some ORM frameworks, such as Hibernate, can invoke application-provided callbacks when data objects change. But there’s no support for automatically publishing messages as part of the transaction that updates the data. Consequently, as with history and auditing, developers must bolt on event-generation logic, which risks not being synchronized with the business logic. Fortunately, there’s a solution to these issues: event sourcing.

6.1.2. Overview of event sourcing

Event sourcing is an event-centric technique for implementing business logic and persisting aggregates. An aggregate is stored in the database as a series of events. Each event represents a state change of the aggregate. An aggregate’s business logic is structured around the requirement to produce and consume these events. Let’s see how that works.

Event sourcing persists aggregates using events

Earlier, in section 6.1.1, I discussed how traditional persistence maps aggregates to tables, their fields to columns, and their instances to rows. Event sourcing is a very different approach to persisting aggregates that builds on the concept of domain events. It persists each aggregate as a sequence of events in the database, known as an event store.

Consider, for example, the Order aggregate. As figure 6.2 shows, rather than store each Order as a row in an ORDER table, event sourcing persists each Order aggregate as one or more rows in an EVENTS table. Each row is a domain event, such as Order Created, Order Approved, Order Shipped, and so on.

Figure 6.2. Event sourcing persists each aggregate as a sequence of events. An RDBMS-based application, for example, can store the events in an EVENTS table.

When an application creates or updates an aggregate, it inserts the events emitted by the aggregate into the EVENTS table. An application loads an aggregate from the event store by retrieving its events and replaying them. Specifically, loading an aggregate consists of the following three steps:

  1. Load the events for the aggregate.
  2. Create an aggregate instance by using its default constructor.
  3. Iterate through the events, calling apply().

For example, the Eventuate Client framework, covered later in section 6.2.2, uses code similar to the following to reconstruct an aggregate:

Class aggregateClass = ...;
Aggregate aggregate = aggregateClass.newInstance();
for (Event event : events) {
  aggregate = aggregate.applyEvent(event);
}
// use aggregate...

It creates an instance of the class and iterates through the events, calling the aggregate’s applyEvent() method. If you’re familiar with functional programming, you may recognize this as a fold or reduce operation.
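To make the fold analogy concrete, here is a toy sketch (the Account aggregate and AmountAdded event are invented for illustration; this is not the Eventuate API) showing that replaying events is a left fold over the event list:

```java
import java.util.List;

// Toy example showing event replay as a left fold over the event list.
// Account and AmountAdded are invented names, not the book's code.
public class ReplaySketch {

    interface Event {}

    record AmountAdded(int amount) implements Event {}

    // A tiny aggregate whose whole state is a running balance.
    static class Account {
        int balance = 0;

        // applyEvent returns the updated aggregate, mirroring the loop above.
        Account applyEvent(Event event) {
            if (event instanceof AmountAdded added) {
                balance += added.amount();
            }
            return this;
        }
    }

    // Replay is a fold: start from the default-constructed aggregate and
    // apply each stored event in order.
    static Account replay(List<Event> events) {
        Account aggregate = new Account();
        for (Event event : events) {
            aggregate = aggregate.applyEvent(event);
        }
        return aggregate;
    }
}
```

Replaying the events [AmountAdded(10), AmountAdded(5)] therefore reconstructs an Account with a balance of 15, exactly as a reduce over the list would.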

It may be strange and unfamiliar to reconstruct the in-memory state of an aggregate by loading the events and replaying events. But in some ways, it’s not all that different from how an ORM framework such as JPA or Hibernate loads an entity. An ORM framework loads an object by executing one or more SELECT statements to retrieve the current persisted state, instantiating objects using their default constructors. It uses reflection to initialize those objects. What’s different about event sourcing is that the reconstruction of the in-memory state is accomplished using events.

Let’s now look at the requirements event sourcing places on domain events.

Events represent state changes

Chapter 5 defines domain events as a mechanism for notifying subscribers of changes to aggregates. Events can either contain minimal data, such as just the aggregate ID, or can be enriched to contain data that’s useful to a typical consumer. For example, the Order Service can publish an OrderCreated event when an order is created. An OrderCreated event may only contain the orderId. Alternatively, the event could contain the complete order so consumers of that event don’t have to fetch the data from the Order Service. Whether events are published and what those events contain are driven by the needs of the consumers. With event sourcing, though, it’s primarily the aggregate that determines the events and their structure.

Events aren’t optional when using event sourcing. Every state change of an aggregate, including its creation, is represented by a domain event. Whenever the aggregate’s state changes, it must emit an event. For example, an Order aggregate must emit an OrderCreated event when it’s created, and an Order* event whenever it is updated. This is a much more stringent requirement than before, when an aggregate only emitted events that were of interest to consumers.

What’s more, an event must contain the data that the aggregate needs to perform the state transition. The state of an aggregate consists of the values of the fields of the objects that comprise the aggregate. A state change might be as simple as changing the value of the field of an object, such as Order.state. Alternatively, a state change can involve adding or removing objects, such as revising an Order’s line items.

Suppose, as figure 6.3 shows, that the current state of the aggregate is S and the new state is S'. An event E that represents the state change must contain the data such that when an Order is in state S, calling order.apply(E) will update the Order to state S'. In the next section you’ll see that apply() is a method that performs the state change represented by an event.

Figure 6.3. Applying event E when the Order is in state S must change the Order to state S'. The event must contain the data needed to perform the state change.

Some events, such as the Order Shipped event, contain little or no data and just represent the state transition. The apply() method handles an Order Shipped event by changing the Order’s status field to SHIPPED. Other events, however, contain a lot of data. An OrderCreated event, for example, must contain all the data needed by the apply() method to initialize an Order, including its line items, payment information, delivery information, and so on. Because events are used to persist an aggregate, you no longer have the option of using a minimal OrderCreated event that contains the orderId.

Aggregate methods are all about events

The business logic handles a request to update an aggregate by calling a command method on the aggregate root. In a traditional application, a command method typically validates its arguments and then updates one or more of the aggregate’s fields. Command methods in an event sourcing-based application work differently, because they must generate events. As figure 6.4 shows, the outcome of invoking an aggregate’s command method is a sequence of events that represent the state changes that must be made. These events are persisted in the database and applied to the aggregate to update its state.

Figure 6.4. Processing a command generates events without changing the state of the aggregate. The aggregate is updated by applying the events.

The requirement to generate events and apply them requires a restructuring—albeit mechanical—of the business logic. Event sourcing refactors a command method into two or more methods. The first method takes a command object parameter, which represents the request, and determines what state changes need to be performed. It validates its arguments, and without changing the state of the aggregate, returns a list of events representing the state changes. This method typically throws an exception if the command cannot be performed.

The other methods each take a particular event type as a parameter and update the aggregate. There’s one of these methods for each event. It’s important to note that these methods can’t fail, because an event represents a state change that has happened. Each method updates the aggregate based on the event.

The Eventuate Client framework, an event-sourcing framework described in more detail in section 6.2.2, names these methods process() and apply(). A process() method takes a command object, which contains the arguments of the update request, as a parameter and returns a list of events. An apply() method takes an event as a parameter and returns void. An aggregate will define multiple overloaded versions of these methods: one process() method for each command class and one apply() method for each event type emitted by the aggregate. Figure 6.5 shows an example.

Figure 6.5. Event sourcing splits a method that updates an aggregate into a process() method, which takes a command and returns events, and one or more apply() methods, which take an event and update the aggregate.

In this example, the reviseOrder() method is replaced by a process() method and an apply() method. The process() method takes a ReviseOrder command as a parameter. This command class is defined by applying Introduce Parameter Object refactoring (https://refactoring.com/catalog/introduceParameterObject.html) to the reviseOrder() method. The process() method either returns an OrderRevisionProposed event, or throws an exception if it’s too late to revise the Order or if the proposed revision doesn’t meet the order minimum. The apply() method for the OrderRevisionProposed event changes the state of the Order to REVISION_PENDING.
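A minimal, self-contained sketch of this process()/apply() split follows. The command and event here carry only an integer total; the book's actual ReviseOrder command and OrderRevisionProposed event, and its money and validation logic, are richer, so treat this as an illustration of the shape, not the real code:

```java
import java.util.List;

// Minimal sketch of the process()/apply() split. Names follow the text;
// the types and validation are deliberately simplified assumptions.
public class ReviseOrderSketch {

    enum OrderState { APPROVED, REVISION_PENDING }

    record ReviseOrder(int proposedTotal) {}           // command object
    record OrderRevisionProposed(int proposedTotal) {} // emitted event

    static class Order {
        OrderState state = OrderState.APPROVED;
        final int orderMinimum = 10;

        // Validates the command and returns events WITHOUT changing any state.
        List<OrderRevisionProposed> process(ReviseOrder command) {
            if (state != OrderState.APPROVED) {
                throw new IllegalStateException("unsupported state transition: " + state);
            }
            if (command.proposedTotal() < orderMinimum) {
                throw new IllegalArgumentException("order minimum not met");
            }
            return List.of(new OrderRevisionProposed(command.proposedTotal()));
        }

        // Applies the event; this step cannot fail, it only updates state.
        void apply(OrderRevisionProposed event) {
            this.state = OrderState.REVISION_PENDING;
        }
    }
}
```

Note that after process() returns, the Order is still APPROVED; only apply() moves it to REVISION_PENDING, which is what lets the framework persist the events before (or while) updating the in-memory state.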

An aggregate is created using the following steps:

  1. Instantiate the aggregate root using its default constructor.
  2. Invoke process() to generate the new events.
  3. Update the aggregate by iterating through the new events, calling its apply() method.
  4. Save the new events in the event store.

An aggregate is updated using the following steps:

  1. Load the aggregate’s events from the event store.
  2. Instantiate the aggregate root using its default constructor.
  3. Iterate through the loaded events, calling apply() on the aggregate root.
  4. Invoke its process() method to generate new events.
  5. Update the aggregate by iterating through the new events, calling apply().
  6. Save the new events in the event store.
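The update steps can be sketched end to end with a toy aggregate and an in-memory event store. Every name here is invented for illustration (a real store such as Eventuate's also handles optimistic concurrency via event versions, which this sketch omits):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy end-to-end sketch of the event-sourced update loop. The Counter
// aggregate and EventStore are illustrative; concurrency control is omitted.
public class UpdateLoopSketch {

    interface Event {}

    record Incremented(int by) implements Event {}

    // Hypothetical in-memory event store: one append-only event list per ID.
    static class EventStore {
        private final Map<Long, List<Event>> db = new HashMap<>();
        List<Event> load(long aggregateId) {
            return db.getOrDefault(aggregateId, List.of());
        }
        void append(long aggregateId, List<Event> events) {
            db.computeIfAbsent(aggregateId, k -> new ArrayList<>()).addAll(events);
        }
    }

    static class Counter {
        int value = 0;
        void apply(Event event) {
            if (event instanceof Incremented inc) value += inc.by();
        }
        // Validates the request and emits events without mutating state.
        List<Event> process(int by) {
            if (by <= 0) throw new IllegalArgumentException("by must be positive");
            return List.of(new Incremented(by));
        }
    }

    static Counter update(EventStore store, long id, int by) {
        Counter counter = new Counter();                   // step 2: default constructor
        for (Event e : store.load(id)) counter.apply(e);   // steps 1 and 3: load and replay
        List<Event> newEvents = counter.process(by);       // step 4: generate new events
        for (Event e : newEvents) counter.apply(e);        // step 5: apply new events
        store.append(id, newEvents);                       // step 6: persist new events
        return counter;
    }
}
```

Calling update() twice for the same ID replays the first event before processing the second command, so the reconstructed state always reflects the full event history.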

To see this in action, let’s now look at the event sourcing version of the Order aggregate.

The event sourcing-based Order aggregate

Listing 6.1 shows the Order aggregate’s fields and the methods responsible for creating it. The event sourcing version of the Order aggregate has some similarities to the JPA-based version shown in chapter 5. Its fields are almost identical, and it emits similar events. What’s different is that its business logic is implemented in terms of processing commands that emit events and applying those events, which updates its state. Each method that creates or updates the JPA-based aggregate, such as createOrder() and reviseOrder(), is replaced in the event sourcing version by process() and apply() methods.

Listing 6.1. The Order aggregate’s fields and its methods that initialize an instance
public class Order {

  private OrderState state;
  private Long consumerId;
  private Long restaurantId;
  private OrderLineItems orderLineItems;
  private DeliveryInformation deliveryInformation;
  private PaymentInformation paymentInformation;
  private Money orderMinimum;

  public Order() {
  }

  public List<Event> process(CreateOrderCommand command) {            1
     ... validate command ...
    return events(new OrderCreatedEvent(command.getOrderDetails()));
  }

  public void apply(OrderCreatedEvent event) {                        2
    OrderDetails orderDetails = event.getOrderDetails();
    this.orderLineItems = new OrderLineItems(orderDetails.getLineItems());
    this.orderMinimum = orderDetails.getOrderMinimum();
    this.state = APPROVAL_PENDING;
  }

  • 1 Validates the command and returns an OrderCreatedEvent
  • 2 Applies the OrderCreatedEvent by initializing the fields of the Order

This class’s fields are similar to those of the JPA-based Order. The only difference is that the aggregate’s id isn’t stored in the aggregate. The Order’s methods are quite different. The createOrder() factory method has been replaced by process() and apply() methods. The process() method takes a CreateOrder command and emits an OrderCreated event. The apply() method takes an OrderCreated event and initializes the fields of the Order.

We’ll now look at the slightly more complex business logic for revising an order. Previously this business logic consisted of three methods: reviseOrder(), confirmRevision(), and rejectRevision(). The event sourcing version replaces these three methods with three process() methods and some apply() methods. The following listing shows the event sourcing version of reviseOrder() and confirmRevision().

Listing 6.2. The process() and apply() methods for revising an Order aggregate
public class Order {

public List<Event> process(ReviseOrder command) {                          1
  OrderRevision orderRevision = command.getOrderRevision();
  switch (state) {
    case APPROVED:
      LineItemQuantityChange change =
              orderLineItems.lineItemQuantityChange(orderRevision);
      if (!change.newOrderTotal.isGreaterThanOrEqual(orderMinimum)) {
        throw new OrderMinimumNotMetException();
      }
      return singletonList(new OrderRevisionProposed(orderRevision,
                            change.currentOrderTotal, change.newOrderTotal));

    default:
      throw new UnsupportedStateTransitionException(state);
  }
}

public void apply(OrderRevisionProposed event) {                           2
   this.state = REVISION_PENDING;
}

public List<Event> process(ConfirmReviseOrder command) {                   3
  OrderRevision orderRevision = command.getOrderRevision();
  switch (state) {
    case REVISION_PENDING:
      LineItemQuantityChange licd =
            orderLineItems.lineItemQuantityChange(orderRevision);
      return singletonList(new OrderRevised(orderRevision,
              licd.currentOrderTotal, licd.newOrderTotal));
    default:
      throw new UnsupportedStateTransitionException(state);
  }
}


public void apply(OrderRevised event) {                                    4
  OrderRevision orderRevision = event.getOrderRevision();
  if (!orderRevision.getRevisedLineItemQuantities().isEmpty()) {
    orderLineItems.updateLineItems(orderRevision);
  }
  this.state = APPROVED;
}

  • 1 Verify that the Order can be revised and that the revised order meets the order minimum.
  • 2 Change the state of the Order to REVISION_PENDING.
  • 3 Verify that the revision can be confirmed and return an OrderRevised event.
  • 4 Revise the Order.

As you can see, each method has been replaced by a process() method and one or more apply() methods. The reviseOrder() method has been replaced by process(ReviseOrder) and apply(OrderRevisionProposed). Similarly, confirmRevision() has been replaced by process(ConfirmReviseOrder) and apply(OrderRevised).

6.1.3. Handling concurrent updates using optimistic locking

It’s not uncommon for two or more requests to simultaneously update the same aggregate. An application that uses traditional persistence often uses optimistic locking to prevent one transaction from overwriting another’s changes. Optimistic locking typically uses a version column to detect whether an aggregate has changed since it was read. The application maps the aggregate root to a table that has a VERSION column, which is incremented whenever the aggregate is updated. The application updates the aggregate using an UPDATE statement like this:

UPDATE AGGREGATE_ROOT_TABLE
SET VERSION = VERSION + 1 ...
WHERE VERSION = <original version>

This UPDATE statement will only succeed if the version is unchanged from when the application read the aggregate. If two transactions read the same aggregate, the first one that updates the aggregate will succeed. The second one will fail because the version number has changed, so it won’t accidentally overwrite the first transaction’s changes.
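
The version check can be sketched in memory. A real application would execute the UPDATE statement above through JDBC; this hypothetical AggregateRow simulates the same first-writer-wins behavior with a compare-and-set:

```java
import java.util.concurrent.atomic.AtomicLong;

// In-memory sketch of version-based optimistic locking (hypothetical names).
public class OptimisticLockingSketch {
    static class AggregateRow {
        private final AtomicLong version = new AtomicLong(1);
        private volatile String state = "initial";

        long read() { return version.get(); }

        // Succeeds only if no other transaction updated the row since it was read
        boolean update(long expectedVersion, String newState) {
            if (version.compareAndSet(expectedVersion, expectedVersion + 1)) {
                state = newState;
                return true;
            }
            return false; // analogous to the UPDATE affecting zero rows
        }
    }

    public static void main(String[] args) {
        AggregateRow row = new AggregateRow();
        long v = row.read();                        // both transactions read version 1
        System.out.println(row.update(v, "txn A")); // true: first writer wins
        System.out.println(row.update(v, "txn B")); // false: stale version detected
    }
}
```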

An event store can also use optimistic locking to handle concurrent updates. Each aggregate instance has a version that’s read along with the events. When the application inserts events, the event store verifies that the version is unchanged. A simple approach is to use the number of events as the version number. Alternatively, as you’ll see below in section 6.2, an event store could maintain an explicit version number.
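
A sketch of the event-count-as-version approach, using a hypothetical in-memory store in place of a real event store:

```java
import java.util.ArrayList;
import java.util.List;

// The number of already-stored events serves as the aggregate's version.
public class EventStoreVersionSketch {
    private final List<String> events = new ArrayList<>();

    synchronized List<String> load() { return new ArrayList<>(events); }

    // expectedVersion is the event count observed when the aggregate was loaded
    synchronized boolean append(long expectedVersion, List<String> newEvents) {
        if (events.size() != expectedVersion) {
            return false; // another transaction appended events in the meantime
        }
        events.addAll(newEvents);
        return true;
    }

    public static void main(String[] args) {
        EventStoreVersionSketch store = new EventStoreVersionSketch();
        long version = store.load().size();                                 // version 0
        System.out.println(store.append(version, List.of("OrderCreated"))); // true
        System.out.println(store.append(version, List.of("OrderRevised"))); // false: stale version
    }
}
```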

6.1.4. Event sourcing and publishing events

Strictly speaking, event sourcing persists aggregates as events and reconstructs the current state of an aggregate from those events. You can also use event sourcing as a reliable event publishing mechanism. Saving an event in the event store is an inherently atomic operation. We need to implement a mechanism to deliver all persisted events to interested consumers.

Chapter 3 describes a couple of different mechanisms—polling and transaction log tailing—for publishing messages that are inserted into the database as part of a transaction. An event sourcing-based application can publish events using one of these mechanisms. The main difference is that it permanently stores events in an EVENTS table rather than temporarily saving events in an OUTBOX table and then deleting them. Let’s take a look at each approach, starting with polling.

Publishing events using polling

If events are stored in the EVENTS table shown in figure 6.6, an event publisher can poll the table for new events by executing a SELECT statement and publish the events to a message broker. The challenge is determining which events are new. For example, imagine that eventIds are monotonically increasing. The superficially appealing approach is for the event publisher to record the last eventId that it has processed. It would then retrieve new events using a query like this: SELECT * FROM EVENTS where event_id > ? ORDER BY event_id ASC.

Figure 6.6. A scenario in which an event is skipped because transaction A commits after transaction B. Polling sees eventId=1020 and then skips eventId=1010.

The problem with this approach is that transactions can commit in an order that’s different from the order in which they generate events. As a result, the event publisher can accidentally skip over an event. Figure 6.6 shows such a scenario.

In this scenario, Transaction A inserts an event with an EVENT_ID of 1010. Next, transaction B inserts an event with an EVENT_ID of 1020 and then commits. If the event publisher were now to query the EVENTS table, it would find event 1020. Later on, after transaction A committed and event 1010 became visible, the event publisher would ignore it.

One solution to this problem is to add an extra column to the EVENTS table that tracks whether an event has been published. The event publisher would then use the following process:

  1. Find unpublished events by executing this SELECT statement: SELECT * FROM EVENTS where PUBLISHED = 0 ORDER BY event_id ASC.
  2. Publish events to the message broker.
  3. Mark the events as having been published: UPDATE EVENTS SET PUBLISHED = 1 WHERE EVENT_ID in.

This approach prevents the event publisher from skipping events.
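
The polling algorithm with a PUBLISHED flag might look like the following sketch. The EventRow class and the in-memory "table" are hypothetical; a real publisher would execute the SELECT and UPDATE statements shown above against the EVENTS table:

```java
import java.util.ArrayList;
import java.util.List;

public class PollingPublisherSketch {
    static class EventRow {
        final long eventId;
        boolean published;
        EventRow(long eventId) { this.eventId = eventId; }
    }

    // Returns the ids it published, marking each row so it is never re-sent
    static List<Long> pollAndPublish(List<EventRow> eventsTable) {
        List<Long> publishedIds = new ArrayList<>();
        for (EventRow row : eventsTable) {       // SELECT ... WHERE PUBLISHED = 0
            if (!row.published) {
                publishedIds.add(row.eventId);   // publish to the message broker
                row.published = true;            // UPDATE ... SET PUBLISHED = 1
            }
        }
        return publishedIds;
    }

    public static void main(String[] args) {
        List<EventRow> table = new ArrayList<>(List.of(new EventRow(1020)));
        System.out.println(pollAndPublish(table)); // [1020]
        table.add(0, new EventRow(1010));          // late-committing transaction A becomes visible
        System.out.println(pollAndPublish(table)); // [1010]: the event is not skipped
    }
}
```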

Using transaction log tailing to reliably publish events

More sophisticated event stores use transaction log tailing, which, as chapter 3 describes, guarantees that events will be published and is also more performant and scalable. For example, Eventuate Local, an open source event store, uses this approach. It reads events inserted into an EVENTS table from the database transaction log and publishes them to the message broker. Section 6.2 discusses how Eventuate Local works in more detail.

6.1.5. Using snapshots to improve performance

An Order aggregate has relatively few state transitions, so it only has a small number of events. It’s efficient to query the event store for those events and reconstruct an Order aggregate. Long-lived aggregates, though, can have a large number of events. For example, an Account aggregate potentially has a large number of events. Over time, it would become increasingly inefficient to load and fold those events.

A common solution is to periodically persist a snapshot of the aggregate’s state. Figure 6.7 shows an example of using a snapshot. The application restores the state of an aggregate by loading the most recent snapshot and only those events that have occurred since the snapshot was created.

Figure 6.7. Using snapshots improves performance by eliminating the need to load all events. An application only needs to load the snapshot and the events that occurred after it.

In this example, the snapshot version is N. The application only needs to load the snapshot and the two events that follow it in order to restore the state of the aggregate. The previous N events are not loaded from the event store.

When restoring the state of an aggregate from a snapshot, an application first creates an aggregate instance from the snapshot and then iterates through the events, applying them. For example, the Eventuate Client framework, described in section 6.2.2, uses code similar to the following to reconstruct an aggregate:

Class aggregateClass = ...;
Snapshot snapshot = ...;
Aggregate aggregate = recreateFromSnapshot(aggregateClass, snapshot);
for (Event event : events) {
  aggregate = aggregate.applyEvent(event);
}
// use aggregate...

When using snapshots, the aggregate instance is recreated from the snapshot instead of being created using its default constructor. If an aggregate has a simple, easily serializable structure, the snapshot can be, for example, its JSON serialization. More complex aggregates can be snapshotted using the Memento pattern (https://en.wikipedia.org/wiki/Memento_pattern).

The Customer aggregate in the online store example has a very simple structure: the customer’s information, their credit limit, and their credit reservations. A snapshot of a Customer is the JSON serialization of its state. Figure 6.8 shows how to recreate a Customer from a snapshot corresponding to the state of a Customer as of event #103. The Customer Service needs to load the snapshot and the events that have occurred after event #103.

Figure 6.8. The Customer Service recreates the Customer by deserializing the snapshot’s JSON and then loading and applying events #104 through #106.

6.1.6. Idempotent message processing

Services often consume messages from other applications or other services. A service might, for example, consume domain events published by aggregates or command messages sent by a saga orchestrator. As described in chapter 3, an important issue when developing a message consumer is ensuring that it’s idempotent, because a message broker might deliver the same message multiple times.

A message consumer is idempotent if it can safely be invoked with the same message multiple times. The Eventuate Tram framework, for example, implements idempotent message handling by detecting and discarding duplicate messages. It records the ids of processed messages in a PROCESSED_MESSAGES table as part of the local ACID transaction used by the business logic to create or update aggregates. If the ID of a message is in the PROCESSED_MESSAGES table, it’s a duplicate and can be discarded. Event sourcing-based business logic must implement an equivalent mechanism. How this is done depends on whether the event store uses an RDBMS or a NoSQL database.
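
A minimal sketch of this duplicate-detection scheme, with a Set standing in for the PROCESSED_MESSAGES table. In a real service the insert would happen in the same ACID transaction as the aggregate update:

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotentConsumerSketch {
    private final Set<String> processedMessageIds = new HashSet<>(); // PROCESSED_MESSAGES stand-in
    private int updatesApplied = 0;

    // Returns false (and does nothing) when the message is a duplicate
    boolean handle(String messageId) {
        if (!processedMessageIds.add(messageId)) {
            return false; // id already recorded: discard the duplicate
        }
        updatesApplied++; // update the aggregate
        return true;
    }

    public static void main(String[] args) {
        IdempotentConsumerSketch consumer = new IdempotentConsumerSketch();
        System.out.println(consumer.handle("msg-1")); // true
        System.out.println(consumer.handle("msg-1")); // false: redelivery discarded
        System.out.println(consumer.updatesApplied);  // 1
    }
}
```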

Idempotent message processing with an RDBMS-based event store

If an application uses an RDBMS-based event store, it can use an identical approach to detect and discard duplicate messages. It inserts the message ID into the PROCESSED_MESSAGES table as part of the transaction that inserts events into the EVENTS table.

Idempotent message processing when using a NoSQL-based event store

A NoSQL-based event store, which has a limited transaction model, must use a different mechanism to implement idempotent message handling. A message consumer must somehow atomically persist events and record the message ID. Fortunately, there’s a simple solution. A message consumer stores the message’s ID in the events that are generated while processing it. It detects duplicates by verifying that none of an aggregate’s events contains the message ID.
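
This idea can be sketched as follows. StoredEvent and the in-memory event list are hypothetical; the point is that the message ID is persisted atomically with the event and checked before processing:

```java
import java.util.ArrayList;
import java.util.List;

public class MessageIdInEventsSketch {
    // Each stored event carries the id of the message that produced it
    static class StoredEvent {
        final String type;
        final String messageId;
        StoredEvent(String type, String messageId) {
            this.type = type;
            this.messageId = messageId;
        }
    }

    private final List<StoredEvent> aggregateEvents = new ArrayList<>();

    // A message is a duplicate if any of the aggregate's events records its id
    boolean handle(String messageId, String eventType) {
        for (StoredEvent e : aggregateEvents) {
            if (e.messageId.equals(messageId)) {
                return false; // duplicate: discard without generating events
            }
        }
        aggregateEvents.add(new StoredEvent(eventType, messageId));
        return true;
    }

    public static void main(String[] args) {
        MessageIdInEventsSketch aggregate = new MessageIdInEventsSketch();
        System.out.println(aggregate.handle("msg-A", "OrderRevised")); // true
        System.out.println(aggregate.handle("msg-A", "OrderRevised")); // false: duplicate
    }
}
```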

One challenge with using this approach is that processing a message might not generate any events. The lack of events means there’s no record of a message having been processed. A subsequent redelivery and reprocessing of the same message might result in incorrect behavior. For example, consider the following scenario:

  1. Message A is processed but doesn’t update an aggregate.
  2. Message B is processed, and the message consumer updates the aggregate.
  3. Message A is redelivered, and because there’s no record of it having been processed, the message consumer updates the aggregate.
  4. Message B is processed again....

In this scenario, the redelivery of events results in a different and possibly erroneous outcome.

One way to avoid this problem is to always publish an event. If an aggregate doesn’t emit an event, an application saves a pseudo event solely to record the message ID. Event consumers must ignore these pseudo events.

6.1.7. Evolving domain events

Event sourcing, at least conceptually, stores events forever—which is a double-edged sword. On one hand, it provides the application with an audit log of changes that’s guaranteed to be accurate. It also enables an application to reconstruct the historical state of an aggregate. On the other hand, it creates a challenge, because the structure of events often changes over time.

An application must potentially deal with multiple versions of events. For example, a service that loads an Order aggregate could potentially need to fold multiple versions of events. Similarly, an event subscriber might potentially see multiple versions.

Let’s first look at the different ways that events can change, and then I’ll describe a commonly used approach for handling changes.

Event schema evolution

Conceptually, an event sourcing application has a schema that’s organized into three levels:

  • Consists of one or more aggregates
  • Defines the events that each aggregate emits
  • Defines the structure of the events

Table 6.1 shows the different types of changes that can occur at each level.

Table 6.1. The different ways in which an application’s events can evolve

Level      Change                                 Backward compatible
Schema     Define a new aggregate type            Yes
           Remove an existing aggregate           No
           Change the name of an aggregate type   No
Aggregate  Add a new event type                   Yes
           Remove an event type                   No
           Change the name of an event type       No
Event      Add a new field                        Yes
           Delete a field                         No
           Rename a field                         No
           Change the type of a field             No

These changes occur naturally as a service’s domain model evolves over time—for example, when a service’s requirements change or as its developers gain deeper insight into a domain and improve the domain model. At the schema level, developers add, remove, and rename aggregate classes. At the aggregate level, the types of events emitted by a particular aggregate can change. Developers can change the structure of an event type by adding, removing, and changing the name or type of a field.

Fortunately, many of these types of changes are backward-compatible changes. For example, adding a field to an event is unlikely to impact consumers. A consumer ignores unknown fields. Other changes, though, aren’t backward compatible. For example, changing the name of an event or the name of a field requires consumers of that event type to be changed.

Managing schema changes through upcasting

In the SQL database world, changes to a database schema are commonly handled using schema migrations. Each schema change is represented by a migration, a SQL script that changes the schema and migrates the data to a new schema. The schema migrations are stored in a version control system and applied to a database using a tool such as Flyway.

An event sourcing application can use a similar approach to handle non-backward-compatible changes. But instead of migrating events to the new schema version in situ, event sourcing frameworks transform events when they’re loaded from the event store. A component commonly called an upcaster updates individual events from an old version to a newer version. As a result, the application code only ever deals with the current event schema.
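
A sketch of a single upcaster step. Representing events as version-tagged maps and the renamed "total" field are assumptions for illustration, not the API of any particular framework:

```java
import java.util.HashMap;
import java.util.Map;

public class UpcasterSketch {
    // Upgrades a single event from schema version 1 to version 2,
    // where the hypothetical change renamed "total" to "orderTotal"
    static Map<String, Object> upcastV1ToV2(Map<String, Object> event) {
        if (!Integer.valueOf(1).equals(event.get("schemaVersion"))) {
            return event; // already current: nothing to do
        }
        Map<String, Object> upgraded = new HashMap<>(event);
        upgraded.put("orderTotal", upgraded.remove("total")); // apply the rename
        upgraded.put("schemaVersion", 2);
        return upgraded;
    }

    public static void main(String[] args) {
        Map<String, Object> old = new HashMap<>();
        old.put("schemaVersion", 1);
        old.put("total", 1250);
        Map<String, Object> current = upcastV1ToV2(old);
        System.out.println(current.get("orderTotal"));    // 1250
        System.out.println(current.get("schemaVersion")); // 2
    }
}
```

The aggregate’s apply() methods then only ever see version-2 events.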

Now that we’ve looked at how event sourcing works, let’s consider its benefits and drawbacks.

6.1.8. Benefits of event sourcing

Event sourcing has both benefits and drawbacks. The benefits include the following:

  • Reliably publishes domain events
  • Preserves the history of aggregates
  • Mostly avoids the O/R impedance mismatch problem
  • Provides developers with a time machine

Let’s examine each benefit in more detail.

Reliably publishes domain events

A major benefit of event sourcing is that it reliably publishes events whenever the state of an aggregate changes. That’s a good foundation for an event-driven microservice architecture. Also, because each event can store the identity of the user who made the change, event sourcing provides an audit log that’s guaranteed to be accurate. The stream of events can be used for a variety of other purposes, including notifying users, application integration, analytics, and monitoring.

Preserves the history of aggregates

Another benefit of event sourcing is that it stores the entire history of each aggregate. You can easily implement temporal queries that retrieve the past state of an aggregate. To determine the state of an aggregate at a given point in time, you fold the events that occurred up until that point. It’s straightforward, for example, to calculate the available credit of a customer at some point in the past.
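
A temporal query can be sketched as a fold over only the events recorded up to the requested instant. TimestampedEvent and the credit amounts below are hypothetical:

```java
import java.util.List;

public class TemporalQuerySketch {
    static class TimestampedEvent {
        final long timestamp;
        final int creditDelta; // positive: credit granted/released, negative: reserved
        TimestampedEvent(long timestamp, int creditDelta) {
            this.timestamp = timestamp;
            this.creditDelta = creditDelta;
        }
    }

    // Available credit as of a given instant: fold only events up to that instant
    static int creditAsOf(List<TimestampedEvent> events, long asOf) {
        return events.stream()
                .filter(e -> e.timestamp <= asOf)
                .mapToInt(e -> e.creditDelta)
                .sum();
    }

    public static void main(String[] args) {
        List<TimestampedEvent> history = List.of(
                new TimestampedEvent(1, 1000),  // credit limit set
                new TimestampedEvent(2, -300),  // credit reserved
                new TimestampedEvent(3, 300));  // reservation released
        System.out.println(creditAsOf(history, 2)); // 700: the customer's past state
        System.out.println(creditAsOf(history, 3)); // 1000: the current state
    }
}
```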

Mostly avoids the O/R impedance mismatch problem

Event sourcing persists events rather than aggregates. Events typically have a simple, easily serializable structure. As mentioned earlier, a service can snapshot a complex aggregate by serializing a memento of its state, which adds a level of indirection between an aggregate and its serialized representation.

Provides developers with a time machine

Event sourcing stores a history of everything that’s happened in the lifetime of an application. Imagine that the FTGO developers need to implement a new requirement to market to customers who added an item to their shopping cart and then removed it. A traditional application wouldn’t preserve this information, so it could only market to customers who add and remove items after the feature is implemented. In contrast, an event sourcing-based application can immediately market to customers who have done this in the past. It’s as if event sourcing provides developers with a time machine for traveling to the past and implementing unanticipated requirements.

6.1.9. Drawbacks of event sourcing

Event sourcing isn’t a silver bullet. It has the following drawbacks:

  • It has a different programming model that has a learning curve.
  • It has the complexity of a messaging-based application.
  • Evolving events can be tricky.
  • Deleting data is tricky.
  • Querying the event store is challenging.

Let’s look at each drawback.

Different programming model that has a learning curve

It’s a different and unfamiliar programming model, and that means a learning curve. In order for an existing application to use event sourcing, you must rewrite its business logic. Fortunately, that’s a fairly mechanical transformation that you can do when you migrate your application to microservices.

Complexity of a messaging-based application

Another drawback of event sourcing is that message brokers usually guarantee at-least-once delivery. Event handlers that aren’t idempotent must detect and discard duplicate events. The event sourcing framework can help by assigning each event a monotonically increasing ID. An event handler can then detect duplicate events by tracking the highest-seen event ID. This even happens automatically when event handlers update aggregates.
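
Tracking the highest-seen event ID can be sketched like this, assuming IDs increase monotonically as described above:

```java
public class HighestSeenIdSketch {
    private long highestSeenEventId = -1;

    // An event is a duplicate if its id is not greater than the highest id seen
    boolean handle(long eventId) {
        if (eventId <= highestSeenEventId) {
            return false; // duplicate redelivery: discard
        }
        highestSeenEventId = eventId;
        return true;      // process the event
    }

    public static void main(String[] args) {
        HighestSeenIdSketch handler = new HighestSeenIdSketch();
        System.out.println(handler.handle(101)); // true
        System.out.println(handler.handle(102)); // true
        System.out.println(handler.handle(101)); // false: already seen
    }
}
```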

Evolving events can be tricky

With event sourcing, the schema of events (and snapshots!) will evolve over time. Because events are stored forever, aggregates potentially need to fold events corresponding to multiple schema versions. There’s a real risk that aggregates may become bloated with code to deal with all the different versions. As mentioned in section 6.1.7, a good solution to this problem is to upgrade events to the latest version when they’re loaded from the event store. This approach separates the code that upgrades events from the aggregate, which simplifies the aggregates because they only need to apply the latest version of the events.

Deleting data is tricky

Because one of the goals of event sourcing is to preserve the history of aggregates, it intentionally stores data forever. The traditional way to delete data when using event sourcing is to do a soft delete. An application deletes an aggregate by setting a deleted flag. The aggregate will typically emit a Deleted event, which notifies any interested consumers. Any code that accesses that aggregate can check the flag and act accordingly.

Using a soft delete works well for many kinds of data. One challenge, however, is complying with the General Data Protection Regulation (GDPR), a European data protection and privacy regulation that grants individuals the right to erasure (https://gdpr-info.eu/art-17-gdpr/). An application must have the ability to forget a user’s personal information, such as their email address. The issue with an event sourcing-based application is that the email address might either be stored in an AccountCreated event or used as the primary key of an aggregate. The application somehow must forget about the user without deleting the events.

Encryption is one mechanism you can use to solve this problem. Each user has an encryption key, which is stored in a separate database table. The application uses that encryption key to encrypt any events containing the user’s personal information before storing them in an event store. When a user requests to be erased, the application deletes the encryption key record from the database table. The user’s personal information is effectively deleted, because the events can no longer be decrypted.
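
A sketch of this crypto-shredding technique. The XOR "cipher" keeps the example self-contained and is emphatically not real encryption; a production implementation would use an authenticated cipher such as AES-GCM:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class CryptoShreddingSketch {
    private final Map<String, Byte> keyTable = new HashMap<>(); // per-user keys

    // Encrypt an event payload with the user's key (placeholder XOR, not real crypto)
    byte[] encryptForUser(String userId, byte[] plaintext) {
        byte key = keyTable.computeIfAbsent(userId, id -> (byte) 0x5A);
        byte[] out = new byte[plaintext.length];
        for (int i = 0; i < plaintext.length; i++) out[i] = (byte) (plaintext[i] ^ key);
        return out;
    }

    // Events stay in the store, but without the key they can't be decrypted
    Optional<byte[]> decryptForUser(String userId, byte[] ciphertext) {
        Byte key = keyTable.get(userId);
        if (key == null) return Optional.empty(); // key erased: data unreadable
        byte[] out = new byte[ciphertext.length];
        for (int i = 0; i < ciphertext.length; i++) out[i] = (byte) (ciphertext[i] ^ key);
        return Optional.of(out);
    }

    // "Right to erasure": delete only the key, never the events
    void erase(String userId) { keyTable.remove(userId); }

    public static void main(String[] args) {
        CryptoShreddingSketch store = new CryptoShreddingSketch();
        byte[] event = store.encryptForUser("user-1", "alice@example.com".getBytes());
        System.out.println(store.decryptForUser("user-1", event).isPresent()); // true
        store.erase("user-1");
        System.out.println(store.decryptForUser("user-1", event).isPresent()); // false
    }
}
```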

Encrypting events solves most problems with erasing a user’s personal information. But if some aspect of a user’s personal information, such as email address, is used as an aggregate ID, throwing away the encryption key may not be sufficient. For example, section 6.2 describes an event store that has an entities table whose primary key is the aggregate ID. One solution to this problem is to use the technique of pseudonymization, replacing the email address with a UUID token and using that as the aggregate ID. The application stores the association between the UUID token and the email address in a database table. When a user requests to be erased, the application deletes the row for their email address from that table. This prevents the application from mapping the UUID back to the email address.

Querying the event store is challenging

Imagine you need to find customers who have exhausted their credit limit. Because there isn’t a column containing the credit, you can’t write SELECT * FROM CUSTOMER WHERE CREDIT_LIMIT = 0. Instead, you must use a more complex and potentially inefficient query that has a nested SELECT to compute the credit limit by folding events that set the initial credit and adjusting it. To make matters worse, a NoSQL-based event store will typically only support primary key-based lookup. Consequently, you must implement queries using the CQRS approach described in chapter 7.
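To make the folding concrete, here is a sketch of a projection that replays a customer's events to compute the value the SELECT would need. The event classes are hypothetical stand-ins, not types from the book's example application.

```java
import java.util.List;

// Sketch of why ad hoc queries are hard: the remaining credit only
// exists after folding the customer's event history.
public class CreditProjection {
  static class CreditLimitSet {
    final long amount;
    CreditLimitSet(long amount) { this.amount = amount; }
  }
  static class CreditReserved {
    final long amount;
    CreditReserved(long amount) { this.amount = amount; }
  }

  // Replays the events in order to derive the current available credit.
  public static long availableCredit(List<Object> history) {
    long credit = 0;
    for (Object e : history) {
      if (e instanceof CreditLimitSet) credit = ((CreditLimitSet) e).amount;
      else if (e instanceof CreditReserved) credit -= ((CreditReserved) e).amount;
    }
    return credit;
  }
}
```

A CQRS read model performs this fold once, ahead of time, instead of on every query.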

6.2. Implementing an event store

An application that uses event sourcing stores its events in an event store. An event store is a hybrid of a database and a message broker. It behaves as a database because it has an API for inserting and retrieving an aggregate’s events by primary key. And it behaves as a message broker because it has an API for subscribing to events.

There are a few different ways to implement an event store. One option is to implement your own event store and event sourcing framework. You can, for example, persist events in an RDBMS. A simple, albeit low-performance, way to publish events is for subscribers to poll the EVENTS table for events. But, as noted in section 6.1.4, one challenge is ensuring that a subscriber processes all events in order.

Another option is to use a special-purpose event store, which typically provides a rich set of features and better performance and scalability. There are several to choose from:

  • Event Store: A .NET-based open source event store developed by Greg Young, an event sourcing pioneer (https://eventstore.org).
  • Lagom: A microservices framework developed by Lightbend, the company formerly known as Typesafe (www.lightbend.com/lagom-framework).
  • Axon: An open source Java framework for developing event-driven applications that use event sourcing and CQRS (www.axonframework.org).
  • Eventuate: Developed by my startup, Eventuate (http://eventuate.io). There are two versions of Eventuate: Eventuate SaaS, a cloud service, and Eventuate Local, an Apache Kafka/RDBMS-based open source project.

Although these frameworks differ in the details, the core concepts remain the same. Because Eventuate is the framework I’m most familiar with, that’s the one I cover here. It has a straightforward, easy-to-understand architecture that illustrates event sourcing concepts. You can use it in your applications, reimplement the concepts yourself, or apply what you learn here to build applications with one of the other event sourcing frameworks.

I begin the following sections by describing how the Eventuate Local event store works. Then I describe the Eventuate Client framework for Java, an easy-to-use framework for writing event sourcing-based business logic that uses the Eventuate Local event store.

6.2.1. How the Eventuate Local event store works

Eventuate Local is an open source event store. Figure 6.9 shows the architecture. Events are stored in a database, such as MySQL. Applications insert and retrieve aggregate events by primary key. Applications consume events from a message broker, such as Apache Kafka. A transaction log tailing mechanism propagates events from the database to the message broker.

Figure 6.9. The architecture of Eventuate Local. It consists of an event database (for example, MySQL) that stores the events, an event broker (for example, Apache Kafka) that delivers events to subscribers, and an event relay that publishes the events stored in the event database to the event broker.

Let’s look at the different Eventuate Local components, starting with the database schema.

The schema of Eventuate Local's event database

The event database consists of three tables:

  • events: Stores the events
  • entities: One row per entity
  • snapshots: Stores snapshots

The central table is the events table. The structure of this table is very similar to the table shown in figure 6.2. Here’s its definition:

create table events (
  event_id varchar(1000) PRIMARY KEY,
  event_type varchar(1000),
  event_data varchar(1000) NOT NULL,
  entity_type VARCHAR(1000) NOT NULL,
  entity_id VARCHAR(1000) NOT NULL,
  triggering_event VARCHAR(1000)
);

The triggering_event column is used to detect duplicate events/messages. It stores the ID of the message/event whose processing generated this event.

The entities table stores the current version of each entity. It’s used to implement optimistic locking. Here’s the definition of this table:

create table entities (
  entity_type VARCHAR(1000),
  entity_id VARCHAR(1000),
  entity_version VARCHAR(1000) NOT NULL,
  PRIMARY KEY(entity_type, entity_id)
);

When an entity is created, a row is inserted into this table. Each time an entity is updated, the entity_version column is updated.

The snapshots table stores the snapshots of each entity. Here’s the definition of this table:

create table snapshots (
  entity_type VARCHAR(1000),
  entity_id VARCHAR(1000),
  entity_version VARCHAR(1000),
  snapshot_type VARCHAR(1000) NOT NULL,
  snapshot_json VARCHAR(1000) NOT NULL,
  triggering_events VARCHAR(1000),
  PRIMARY KEY(entity_type, entity_id, entity_version)
);

The entity_type and entity_id columns specify the snapshot’s entity. The snapshot_json column is the serialized representation of the snapshot, and the snapshot_type is its type. The entity_version specifies the version of the entity that this is a snapshot of.

The three operations supported by this schema are find(), create(), and update(). The find() operation queries the snapshots table to retrieve the latest snapshot, if any. If a snapshot exists, the find() operation queries the events table to find all events whose event_id is greater than the snapshot’s entity_version. Otherwise, find() retrieves all events for the specified entity. The find() operation also queries the entities table to retrieve the entity’s current version.

The create() operation inserts a row into the entities table and inserts the events into the events table. The update() operation inserts events into the events table. It also performs an optimistic locking check by updating the entity version in the entities table using this UPDATE statement:

UPDATE entities SET entity_version = ?
WHERE entity_type = ? and entity_id = ? and entity_version = ?

This statement verifies that the version is unchanged since it was retrieved by the find() operation. It also updates the entity_version to the new version. The update() operation performs these updates within a transaction in order to ensure atomicity.
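The compare-and-set semantics of that conditional UPDATE can be sketched in a few lines. This is an illustrative in-memory model, not the Eventuate implementation: a map replaces the entities table.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the optimistic locking check performed by update():
// the new version is written only if the expected version still matches.
public class EntityVersions {
  private final Map<String, String> versionByEntityId = new HashMap<>();

  public void create(String entityId, String version) {
    versionByEntityId.put(entityId, version);
  }

  // Returns true if the update "wins"; false means a concurrent update
  // changed the version first, so the caller must reload and retry.
  public boolean compareAndSetVersion(String entityId,
                                      String expected, String next) {
    return versionByEntityId.replace(entityId, expected, next);
  }
}
```

In SQL, the equivalent signal is the UPDATE statement's affected-row count: zero rows updated means the version check failed.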

Now that we’ve looked at how Eventuate Local stores an aggregate’s events and snapshots, let’s see how a client subscribes to events using Eventuate Local’s event broker.

Consuming events by subscribing to Eventuate Local's event broker

Services consume events by subscribing to the event broker, which is implemented using Apache Kafka. The event broker has a topic for each aggregate type. As described in chapter 3, a topic is a partitioned message channel. This enables consumers to scale horizontally while preserving message ordering. The aggregate ID is used as the partition key, which preserves the ordering of events published by a given aggregate. To consume an aggregate’s events, a service subscribes to the aggregate’s topic.
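The ordering guarantee follows from how a partition key maps to a partition. A simplified sketch, assuming the common hash-based assignment (real Kafka clients use their own hash function):

```java
// Sketch of partition-key routing: events with the same aggregate ID
// always hash to the same partition, so a single consumer of that
// partition sees the aggregate's events in publication order.
public class Partitioner {
  public static int partitionFor(String aggregateId, int numPartitions) {
    // Math.floorMod avoids a negative partition when hashCode() is negative.
    return Math.floorMod(aggregateId.hashCode(), numPartitions);
  }
}
```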

Let’s now look at the event relay—the glue between the event database and the event broker.

The Eventuate Local event relay propagates events from the database to the message broker

The event relay propagates events inserted into the event database to the event broker. It uses transaction log tailing whenever possible and polling for other databases. For example, the MySQL version of the event relay uses the MySQL master/slave replication protocol. The event relay connects to the MySQL server as if it were a slave and reads the MySQL binlog, a record of updates made to the database. Inserts into the EVENTS table, which correspond to events, are published to the appropriate Apache Kafka topic. The event relay ignores any other kinds of changes.

The event relay is deployed as a standalone process. In order to restart correctly, it periodically saves the current position in the binlog—filename and offset—in a special Apache Kafka topic. On startup, it first retrieves the last recorded position from the topic. The event relay then starts reading the MySQL binlog from that position.
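The checkpoint-and-resume behavior can be sketched as follows. This is an illustrative model, not the relay's actual code: an append-only deque stands in for the Apache Kafka topic, and the binlog file name is a made-up example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the relay's restart logic: (filename, offset) checkpoints
// are appended to a topic, and startup resumes from the newest one.
public class BinlogCheckpoint {
  static class Position {
    final String filename;
    final long offset;
    Position(String filename, long offset) {
      this.filename = filename;
      this.offset = offset;
    }
  }

  private final Deque<Position> checkpointTopic = new ArrayDeque<>();

  // Periodically record how far into the binlog the relay has read.
  public void save(String filename, long offset) {
    checkpointTopic.addLast(new Position(filename, offset));
  }

  // On startup, resume from the last recorded position (null if none,
  // in which case the relay would start from the beginning).
  public Position resumeFrom() {
    return checkpointTopic.peekLast();
  }
}
```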

The event database, message broker, and event relay comprise the event store. Let’s now look at the framework a Java application uses to access the event store.

6.2.2. The Eventuate client framework for Java

The Eventuate client framework enables developers to write event sourcing-based applications that use the Eventuate Local event store. The framework, shown in figure 6.10, provides the foundation for developing event sourcing-based aggregates, services, and event handlers.

Figure 6.10. The main classes and interfaces provided by the Eventuate client framework for Java

The framework provides base classes for aggregates, commands, and events. There’s also an AggregateRepository class that provides CRUD functionality. And the framework has an API for subscribing to events.

Let’s briefly look at each of the types shown in figure 6.10.

Defining aggregates with the ReflectiveMutableCommandProcessingAggregate class

ReflectiveMutableCommandProcessingAggregate is the base class for aggregates. It’s a generic class with two type parameters: the first is the concrete aggregate class, and the second is the superclass of the aggregate’s command classes. As its rather long name suggests, it uses reflection to dispatch commands and events to the appropriate methods. Commands are dispatched to a process() method, and events to an apply() method.

The Order class you saw earlier extends ReflectiveMutableCommandProcessingAggregate. The following listing shows the Order class.

Listing 6.3. The Eventuate version of the Order class
public class Order extends ReflectiveMutableCommandProcessingAggregate<
      Order, OrderCommand> {

  public List<Event> process(CreateOrderCommand command) { ... }

  public void apply(OrderCreatedEvent event) { ... }

  ...
}

The two type parameters passed to ReflectiveMutableCommandProcessingAggregate are Order and OrderCommand, which is the base interface for Order’s commands.

Defining aggregate commands

An aggregate’s command classes must extend an aggregate-specific base interface, which itself must extend the Command interface. For example, the Order aggregate’s commands extend OrderCommand:

public interface OrderCommand extends Command {
}

public class CreateOrderCommand implements OrderCommand { ... }

The OrderCommand interface extends Command, and the CreateOrderCommand class implements OrderCommand.

Defining domain events

An aggregate’s event classes must implement the Event interface, which is a marker interface with no methods. It’s also useful to define a common base interface that extends Event for all of an aggregate’s event classes. For example, here’s the definition of the OrderCreated event:

interface OrderEvent extends Event {

}

public class OrderCreated implements OrderEvent { ... }

The OrderCreated event class implements OrderEvent, the base interface for the Order aggregate’s event classes. The OrderEvent interface extends Event.

Creating, finding, and updating aggregates with the AggregateRepository class

The framework provides several ways to create, find, and update aggregates. The simplest approach, which I describe here, is to use an AggregateRepository. AggregateRepository is a generic class that’s parameterized by the aggregate class and the aggregate’s base command class. It provides three overloaded methods:

  • save(): Creates an aggregate
  • find(): Finds an aggregate
  • update(): Updates an aggregate

The save() and update() methods are particularly convenient because they encapsulate the boilerplate code required for creating and updating aggregates. For instance, save() takes a command object as a parameter and performs the following steps:

  1. Instantiates the aggregate using its default constructor
  2. Invokes process() to process the command
  3. Applies the generated events by calling apply()
  4. Saves the generated events in the event store

The update() method is similar. It has two parameters, an aggregate ID and a command, and performs the following steps:

  1. Retrieves the aggregate from the event store
  2. Invokes process() to process the command
  3. Applies the generated events by calling apply()
  4. Saves the generated events in the event store
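The save()/update() flow described above can be sketched with a hand-rolled repository. This is not the Eventuate API: the aggregate, commands, and events are simplified stand-ins (plain strings), and an in-memory list models the event store.

```java
import java.util.ArrayList;
import java.util.List;

// Hand-rolled sketch of the save()/update() steps listed above.
public class MiniOrderRepository {
  static class Order {
    String state = "NEW";

    // process(): validate the command and return the resulting events
    List<String> process(String command) {
      return List.of(command.equals("approve") ? "OrderApproved" : "OrderCreated");
    }

    // apply(): update in-memory state from a single event
    void apply(String event) {
      if (event.equals("OrderApproved")) state = "APPROVED";
    }
  }

  private final List<String> eventStore = new ArrayList<>();

  // save(): instantiate the aggregate, process the command,
  // apply the resulting events, and persist them.
  public Order save(String command) {
    Order order = new Order();
    List<String> events = order.process(command);
    events.forEach(order::apply);
    eventStore.addAll(events);
    return order;
  }

  // update(): recreate the aggregate by replaying its stored events,
  // then process, apply, and persist, exactly as save() does.
  public Order update(String command) {
    Order order = new Order();
    eventStore.forEach(order::apply);
    List<String> events = order.process(command);
    events.forEach(order::apply);
    eventStore.addAll(events);
    return order;
  }
}
```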

The AggregateRepository class is primarily used by services, which create and update aggregates in response to external requests. For example, the following listing shows how OrderService uses an AggregateRepository to create an Order.

Listing 6.4. OrderService uses an AggregateRepository
public class OrderService {
  private AggregateRepository<Order, OrderCommand> orderRepository;

  public OrderService(AggregateRepository<Order, OrderCommand> orderRepository)
  {
    this.orderRepository = orderRepository;
  }

  public EntityWithIdAndVersion<Order> createOrder(OrderDetails orderDetails) {
    return orderRepository.save(new CreateOrder(orderDetails));
  }
}

OrderService is injected with an AggregateRepository for Orders. Its createOrder() method invokes AggregateRepository.save() with a CreateOrder command.

Subscribing to domain events

The Eventuate Client framework also provides an API for writing event handlers. Listing 6.5 shows an event handler for CreditReserved events. The @EventSubscriber annotation specifies the ID of the durable subscription. Events that are published when the subscriber isn’t running will be delivered when it starts up. The @EventHandlerMethod annotation identifies the creditReserved() method as an event handler.

Listing 6.5. An event handler for the CreditReserved event
@EventSubscriber(id="orderServiceEventHandlers")
public class OrderServiceEventHandlers {

  @EventHandlerMethod
  public void creditReserved(EventHandlerContext<CreditReserved> ctx) {
    CreditReserved event = ctx.getEvent();
    ...
  }
}

An event handler has a parameter of type EventHandlerContext, which contains the event and its metadata.

Now that we’ve looked at how to write event sourcing-based business logic using the Eventuate client framework, let’s look at how to use event sourcing-based business logic with sagas.

6.3. Using sagas and event sourcing together

Imagine you’ve implemented one or more services using event sourcing. You’ve probably written services similar to the one shown in listing 6.4. But if you’ve read chapter 4, you know that services often need to initiate and participate in sagas, sequences of local transactions used to maintain data consistency across services. For example, Order Service uses a saga to validate an Order. Kitchen Service, Consumer Service, and Accounting Service participate in that saga. Consequently, you must integrate sagas and event sourcing-based business logic.

Event sourcing makes it easy to use choreography-based sagas. The participants exchange the domain events emitted by their aggregates. Each participant’s aggregates handle events by processing commands and emitting new events. You need to write the aggregates and the event handler classes, which update the aggregates.

But integrating event sourcing-based business logic with orchestration-based sagas can be more challenging. That’s because the event store’s concept of a transaction might be quite limited. When using some event stores, an application can only create or update a single aggregate and publish the resulting event(s). But each step of a saga consists of several actions that must be performed atomically:

  • Saga creation: A service that initiates a saga must atomically create or update an aggregate and create the saga orchestrator. For example, Order Service’s createOrder() method must create an Order aggregate and a CreateOrderSaga.
  • Saga orchestration: A saga orchestrator must atomically consume replies, update its state, and send command messages.
  • Saga participants: Saga participants, such as Kitchen Service and Order Service, must atomically consume messages, detect and discard duplicates, create or update aggregates, and send reply messages.

Because of this mismatch between these requirements and the transactional capabilities of an event store, integrating orchestration-based sagas and event sourcing potentially creates some interesting challenges.

A key factor in determining the ease of integrating event sourcing and orchestration-based sagas is whether the event store uses an RDBMS or a NoSQL database. The Eventuate Tram saga framework described in chapter 4 and the underlying Tram messaging framework described in chapter 3 rely on flexible ACID transactions provided by the RDBMS. The saga orchestrator and the saga participants use ACID transactions to atomically update their databases and exchange messages. If the application uses an RDBMS-based event store, such as Eventuate Local, then it can cheat and invoke the Eventuate Tram saga framework and update the event store within an ACID transaction. But if the event store uses a NoSQL database, which can’t participate in the same transaction as the Eventuate Tram saga framework, it will have to take a different approach.

Let’s take a closer look at some of the different scenarios and issues you’ll need to address:

  • Implementing choreography-based sagas
  • Creating an orchestration-based saga
  • Implementing an event sourcing-based saga participant
  • Implementing saga orchestrators using event sourcing

We’ll begin by looking at how to implement choreography-based sagas using event sourcing.

6.3.1. Implementing choreography-based sagas using event sourcing

The event-driven nature of event sourcing makes it quite straightforward to implement choreography-based sagas. When an aggregate is updated, it emits an event. An event handler for a different aggregate can consume that event and update its aggregate. The event sourcing framework automatically makes each event handler idempotent.

For example, chapter 4 discusses how to implement Create Order Saga using choreography. ConsumerService, KitchenService, and AccountingService subscribe to the OrderService’s events and vice versa. Each service has an event handler similar to the one shown in listing 6.5. The event handler updates the corresponding aggregate, which emits another event.

Event sourcing and choreography-based sagas work very well together. Event sourcing provides the mechanisms that sagas need, including messaging-based IPC, message de-duplication, and atomic updating of state and message sending. Despite their simplicity, though, choreography-based sagas have several drawbacks. I talk about some of these in chapter 4, but one drawback is specific to event sourcing.

The problem with using events for saga choreography is that events now have a dual purpose. Event sourcing uses events to represent state changes, but using events for saga choreography requires an aggregate to emit an event even if there is no state change. For example, if updating an aggregate would violate a business rule, then the aggregate must emit an event to report the error. An even worse problem is when a saga participant can’t create an aggregate. There’s no aggregate that can emit an error event.

Because of these kinds of issues, it’s best to implement more complex sagas using orchestration. The following sections explain how to integrate orchestration-based sagas and event sourcing. As you’ll see, it involves solving some interesting problems.

Let’s first look at how a service method such as OrderService.createOrder() creates a saga orchestrator.

6.3.2. Creating an orchestration-based saga

Saga orchestrators are created by some service methods. Other service methods, such as OrderService.createOrder(), do two things: create or update an aggregate and create a saga orchestrator. The service must perform both actions in a way that guarantees that if it does the first action, then the second action will be done eventually. How the service ensures that both of these actions are performed depends on the kind of event store it uses.

Creating a saga orchestrator when using an RDBMS-based event store

If a service uses an RDBMS-based event store, it can update the event store and create a saga orchestrator within the same ACID transaction. For example, imagine that the OrderService uses Eventuate Local and the Eventuate Tram saga framework. Its createOrder() method would look like this:

public class OrderService {

  @Autowired
  private SagaManager<CreateOrderSagaState> createOrderSagaManager;

  @Transactional                                                             1
  public EntityWithIdAndVersion<Order> createOrder(OrderDetails orderDetails) {
    EntityWithIdAndVersion<Order> order =
        orderRepository.save(new CreateOrder(orderDetails));                 2

    CreateOrderSagaState data =
        new CreateOrderSagaState(order.getId(), orderDetails);               3

    createOrderSagaManager.create(data, Order.class, order.getId());

    return order;
  }
  ...
}

  • 1 Ensure that createOrder() executes within a database transaction.
  • 2 Create the Order aggregate.
  • 3 Create the CreateOrderSaga.

It’s a combination of the OrderService in listing 6.4 and the OrderService described in chapter 4. Because Eventuate Local uses an RDBMS, it can participate in the same ACID transaction as the Eventuate Tram saga framework. But if a service uses a NoSQL-based event store, creating a saga orchestrator isn’t as straightforward.

Creating a saga orchestrator when using a NoSQL-based event store

A service that uses a NoSQL-based event store will most likely be unable to atomically update the event store and create a saga orchestrator. The saga orchestration framework might use an entirely different database. Even if it uses the same NoSQL database, the application won’t be able to create or update two different objects atomically because of the NoSQL database’s limited transaction model. Instead, a service must have an event handler that creates the saga orchestrator in response to a domain event emitted by the aggregate.

For example, figure 6.11 shows how Order Service creates a CreateOrderSaga using an event handler for the OrderCreated event. Order Service first creates an Order aggregate and persists it in the event store. The event store publishes the OrderCreated event, which is consumed by the event handler. The event handler invokes the Eventuate Tram saga framework to create a CreateOrderSaga.

Figure 6.11. Using an event handler to reliably create a saga after a service creates an event sourcing-based aggregate

One issue to keep in mind when writing an event handler that creates a saga orchestrator is that it must handle duplicate events. At-least-once message delivery means that the event handler that creates the saga might be invoked multiple times. It’s important to ensure that only one saga instance is created.

A straightforward approach is to derive the ID of the saga from a unique attribute of the event. There are a couple of different options. One is to use the ID of the aggregate that emits the event as the ID of the saga. This works well for sagas that are created in response to aggregate creation events.

Another option is to use the event ID as the saga ID. Because event IDs are unique, this will guarantee that the saga ID is unique. If an event is a duplicate, the event handler’s attempt to create the saga will fail because the ID already exists. This option is useful when multiple instances of the same saga can exist for a given aggregate instance.
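
The event-ID-as-saga-ID approach can be sketched as follows. This is a minimal, framework-free illustration, not the Eventuate API: `SagaStore` and `OrderCreatedEventHandler` are hypothetical names, and the store is an in-memory map standing in for a database with a uniqueness constraint on the saga ID.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical saga store: putIfAbsent() models a database insert that
// fails when a saga with the same ID already exists.
class SagaStore {
    private final Map<String, String> sagas = new ConcurrentHashMap<>();

    // Returns true if a new saga was created, false if it already existed.
    boolean createIfAbsent(String sagaId, String sagaData) {
        return sagas.putIfAbsent(sagaId, sagaData) == null;
    }
}

class OrderCreatedEventHandler {
    private final SagaStore sagaStore;

    OrderCreatedEventHandler(SagaStore sagaStore) {
        this.sagaStore = sagaStore;
    }

    // Derive the saga ID from the event ID: a redelivered event maps to
    // the same saga ID, so the second create attempt is a no-op.
    boolean handle(String eventId, String orderId) {
        String sagaId = "CreateOrderSaga-" + eventId;
        return sagaStore.createIfAbsent(sagaId, orderId);
    }
}
```

Because the duplicate delivery produces the same saga ID, at-least-once delivery can never create a second saga instance.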

A service that uses an RDBMS-based event store can also use the same event-driven approach to create sagas. A benefit of this approach is that it promotes loose coupling because services such as OrderService no longer explicitly instantiate sagas.

Now that we’ve looked at how to reliably create a saga orchestrator, let’s see how event sourcing-based services can participate in orchestration-based sagas.

6.3.3. Implementing an event sourcing-based saga participant

Imagine that you used event sourcing to implement a service that needs to participate in an orchestration-based saga. Not surprisingly, if your service uses an RDBMS-based event store such as Eventuate Local, you can easily ensure that it atomically processes saga command messages and sends replies. It can update the event store as part of the ACID transaction initiated by the Eventuate Tram framework. But you must use an entirely different approach if your service uses an event store that can’t participate in the same transaction as the Eventuate Tram framework.

You must address a couple of different issues:

  • Idempotent command message handling
  • Atomically sending a reply message

Let’s first look at how to implement idempotent command message handlers.

Idempotent command message handling

The first problem to solve is how an event sourcing-based saga participant can detect and discard duplicate messages in order to implement idempotent command message handling. Fortunately, this is an easy problem to address using the idempotent message handling mechanism described earlier. A saga participant records the message ID in the events that are generated when processing the message. Before updating an aggregate, the saga participant verifies that it hasn’t processed the message before by looking for the message ID in the events.
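
This mechanism can be sketched with plain collections. This is an illustrative sketch rather than the Eventuate implementation: the aggregate stores the message ID alongside each event it emits, and a duplicate is detected by scanning the previously stored events.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical event record: each event carries the ID of the command
// message whose processing produced it.
class RecordedEvent {
    final String messageId;
    final String eventType;
    RecordedEvent(String messageId, String eventType) {
        this.messageId = messageId;
        this.eventType = eventType;
    }
}

class AccountAggregate {
    private final List<RecordedEvent> events = new ArrayList<>();

    // Process a command message idempotently: if any previously stored
    // event carries this message ID, the message is a duplicate.
    boolean authorize(String messageId) {
        boolean alreadyProcessed = events.stream()
                .anyMatch(e -> e.messageId.equals(messageId));
        if (alreadyProcessed) {
            return false; // duplicate: discard without emitting events
        }
        events.add(new RecordedEvent(messageId, "AccountAuthorized"));
        return true;
    }

    int eventCount() { return events.size(); }
}
```

A real event store would typically index the message IDs rather than scan every event, but the idea is the same: the idempotency record travels with the events themselves, so it is saved atomically with them.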

Atomically sending reply messages

The second problem to solve is how an event sourcing-based saga participant can atomically send replies. In principle, a saga orchestrator could subscribe to the events emitted by an aggregate, but there are two problems with this approach. The first is that a saga command might not actually change the state of an aggregate. In this scenario, the aggregate won’t emit an event, so no reply will be sent to the saga orchestrator. The second problem is that this approach requires the saga orchestrator to treat saga participants that use event sourcing differently from those that don’t. That’s because in order to receive domain events, the saga orchestrator must subscribe to the aggregate’s event channel in addition to its own reply channel.

A better approach is for the saga participant to continue to send a reply message to the saga orchestrator’s reply channel. But rather than send the reply message directly, a saga participant uses a two-step process:

  1. When a saga command handler creates or updates an aggregate, it arranges for a SagaReplyRequested pseudo event to be saved in the event store along with the real events emitted by the aggregate.
  2. An event handler for the SagaReplyRequested pseudo event uses the data contained in the event to construct the reply message, which it then writes to the saga orchestrator’s reply channel.
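
The two-step process above can be sketched as follows. The class names are illustrative stand-ins, not the Eventuate types: the command handler appends a pseudo event carrying the reply-channel information, and a separate event handler turns that pseudo event into a reply message.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical event: the replyChannel field is only populated on the
// SagaReplyRequested pseudo event.
class EventSketch {
    final String type;
    final String replyChannel;
    EventSketch(String type, String replyChannel) {
        this.type = type;
        this.replyChannel = replyChannel;
    }
}

// Step 1: the command handler returns the aggregate's real events plus a
// SagaReplyRequested pseudo event; the event store saves them atomically.
class CommandHandlerSketch {
    List<EventSketch> handleAuthorize(String replyChannel) {
        List<EventSketch> events = new ArrayList<>();
        events.add(new EventSketch("AccountAuthorized", null));
        events.add(new EventSketch("SagaReplyRequested", replyChannel));
        return events;
    }
}

// Step 2: an event handler converts the pseudo event into a reply message
// written to the orchestrator's reply channel.
class ReplyEventHandlerSketch {
    final List<String> sentReplies = new ArrayList<>();

    void handle(EventSketch event) {
        if ("SagaReplyRequested".equals(event.type)) {
            sentReplies.add("Success reply -> " + event.replyChannel);
        }
    }
}
```

Because the pseudo event is persisted in the same atomic save as the real events, a reply is guaranteed to be sent even if the service crashes between saving the events and publishing them.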

Let’s look at an example to see how this works.

An example event sourcing-based saga participant

This example looks at Accounting Service, one of the participants of Create Order Saga. Figure 6.12 shows how Accounting Service handles the Authorize Command sent by the saga. Accounting Service is implemented using the Eventuate Saga framework. The Eventuate Saga framework is an open source framework for writing sagas that use event sourcing. It’s built on the Eventuate Client framework.

Figure 6.12. The event sourcing-based Accounting Service participates in Create Order Saga

This figure shows how Create Order Saga and AccountingService interact. The sequence of events is as follows:

  1. Create Order Saga sends an AuthorizeAccount command to AccountingService via a messaging channel. The Eventuate Saga framework’s SagaCommandDispatcher invokes AccountingServiceCommandHandler to handle the command message.
  2. AccountingServiceCommandHandler sends the command to the specified Account aggregate.
  3. The aggregate emits two events, AccountAuthorized and SagaReplyRequestedEvent.
  4. SagaReplyRequestedEventHandler handles SagaReplyRequestedEvent by sending a reply message to CreateOrderSaga.

The AccountingServiceCommandHandler shown in the following listing handles the AuthorizeAccount command message by calling AggregateRepository.update() to update the Account aggregate.

Listing 6.6. Handling a command message sent by a saga
public class AccountingServiceCommandHandler {

  @Autowired
  private AggregateRepository<Account, AccountCommand> accountRepository;

  public void authorize(CommandMessage<AuthorizeCommand> cm) {
    AuthorizeCommand command = cm.getCommand();
    accountRepository.update(command.getOrderId(),
            command,
            replyingTo(cm)
                .catching(AccountDisabledException.class,
                          () -> withFailure(new AccountDisabledReply()))
                .build());
  }

  ...

The authorize() method invokes an AggregateRepository to update the Account aggregate. The third argument to update(), which is the UpdateOptions, is computed by this expression:

replyingTo(cm)
    .catching(AccountDisabledException.class,
              () -> withFailure(new AccountDisabledReply()))
    .build()

These UpdateOptions configure the update() method to do the following:

  1. Use the message ID as an idempotency key to ensure that the message is processed exactly once. As mentioned earlier, the Eventuate framework stores the idempotency key in all generated events, enabling it to detect and ignore duplicate attempts to update an aggregate.
  2. Add a SagaReplyRequestedEvent pseudo event to the list of events saved in the event store. When SagaReplyRequestedEventHandler receives the SagaReplyRequestedEvent pseudo event, it sends a reply to the CreateOrderSaga’s reply channel.
  3. Send an AccountDisabledReply instead of the default error reply when the aggregate throws an AccountDisabledException.

Now that we’ve looked at how to implement saga participants using event sourcing, let’s find out how to implement saga orchestrators.

6.3.4. Implementing saga orchestrators using event sourcing

So far in this section, I’ve described how event sourcing-based services can initiate and participate in sagas. You can also use event sourcing to implement saga orchestrators. This will enable you to develop applications that are entirely based on an event store.

There are three key design problems you must solve when implementing a saga orchestrator:

  1. How can you persist a saga orchestrator?
  2. How can you atomically change the state of the orchestrator and send command messages?
  3. How can you ensure that a saga orchestrator processes reply messages exactly once?

Chapter 4 discusses how to implement an RDBMS-based saga orchestrator. Let’s look at how to solve these problems when using event sourcing.

Persisting a saga orchestrator using event sourcing

A saga orchestrator has a very simple lifecycle. First, it’s created. Then it’s updated in response to replies from saga participants. We can, therefore, persist a saga using the following events:

  • SagaOrchestratorCreated - The saga orchestrator has been created.
  • SagaOrchestratorUpdated - The saga orchestrator has been updated.

A saga orchestrator emits a SagaOrchestratorCreated event when it’s created and a SagaOrchestratorUpdated event when it has been updated. These events contain the data necessary to re-create the state of the saga orchestrator. For example, the events for CreateOrderSaga, described in chapter 4, would contain a serialized (for example, JSON) CreateOrderSagaState.
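
Because each of these events carries the full serialized state, re-creating the orchestrator reduces to replaying its events and keeping the most recent payload. A minimal sketch, with hypothetical class names standing in for the framework's types:

```java
import java.util.List;

// Hypothetical orchestrator event: the stateJson field holds the serialized
// saga state, e.g. a JSON-serialized CreateOrderSagaState.
class SagaEvent {
    final String type;      // "SagaOrchestratorCreated" or "SagaOrchestratorUpdated"
    final String stateJson;
    SagaEvent(String type, String stateJson) {
        this.type = type;
        this.stateJson = stateJson;
    }
}

class SagaOrchestratorRecovery {
    // Re-create the orchestrator's current state by replaying its events.
    // Each event contains the complete state, so the last one wins.
    static String recreateState(List<SagaEvent> events) {
        String state = null;
        for (SagaEvent e : events) {
            state = e.stateJson;
        }
        return state;
    }
}
```

This is the same fold-over-events pattern used for aggregates, but degenerate: because every event snapshots the whole state, only the final event matters.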

Sending command messages reliably

Another key design issue is how to atomically update the state of the saga and send a command. As described in chapter 4, the Eventuate Tram-based saga implementation does this by updating the orchestrator and inserting the command message into a message table as part of the same transaction. An application that uses an RDBMS-based event store, such as Eventuate Local, can use the same approach. An application that uses a NoSQL-based event store, such as Eventuate SaaS, can use an analogous approach, despite having a very limited transaction model.

The trick is to persist a SagaCommandEvent, which represents a command to send. An event handler then subscribes to SagaCommandEvents and sends each command message to the appropriate channel. Figure 6.13 shows how this works.

Figure 6.13. How an event sourcing-based saga orchestrator sends commands to saga participants

The saga orchestrator uses a two-step process to send commands:

  1. A saga orchestrator emits a SagaCommandEvent for each command that it wants to send. SagaCommandEvent contains all the data needed to send the command, such as the destination channel and the command object. These events are persisted in the event store.
  2. An event handler processes these SagaCommandEvents and sends command messages to the destination message channel.

This two-step approach guarantees that the command will be sent at least once.

Because the event store provides at-least-once delivery, an event handler might be invoked multiple times with the same event. That will cause the event handler for SagaCommandEvents to send duplicate command messages. Fortunately, though, a saga participant can easily detect and discard duplicate commands using the following mechanism. The ID of SagaCommandEvent, which is guaranteed to be unique, is used as the ID of the command message. As a result, the duplicate messages will have the same ID. A saga participant that receives a duplicate command message will discard it using the mechanism described earlier.
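
The deduplication described here can be sketched as follows. This is an illustrative sketch with hypothetical class names: the SagaCommandEvent's unique ID becomes the command message's ID, and the participant remembers the IDs it has already processed.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical command message: its ID is copied from the SagaCommandEvent
// that produced it, so redeliveries carry the same ID.
class CommandMessageSketch {
    final String id;
    final String destination;
    CommandMessageSketch(String id, String destination) {
        this.id = id;
        this.destination = destination;
    }
}

class SagaParticipantSketch {
    private final Set<String> processedIds = new HashSet<>();
    final List<String> handled = new ArrayList<>();

    // Returns false for a duplicate, true when the command is processed.
    boolean receive(CommandMessageSketch message) {
        if (!processedIds.add(message.id)) {
            return false; // duplicate delivery: discard
        }
        handled.add(message.destination);
        return true;
    }
}
```

In a real participant the processed-IDs record would be persisted with the aggregate's events, as described earlier, rather than held in memory.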

Processing replies exactly once

A saga orchestrator also needs to detect and discard duplicate reply messages, which it can do using the mechanism described earlier. The orchestrator stores the reply message’s ID in the events that it emits when processing the reply. It can then easily determine whether a message is a duplicate.

As you can see, event sourcing is a good foundation for implementing sagas. This is in addition to the other benefits of event sourcing, including the inherently reliable generation of events whenever data changes, reliable audit logging, and the ability to do temporal queries. Event sourcing isn’t a silver bullet, though. It involves a significant learning curve. Evolving the event schema isn’t always straightforward. But despite these drawbacks, event sourcing has a major role to play in a microservice architecture. In the next chapter, we’ll switch gears and look at how to tackle a different distributed data management challenge in a microservice architecture: queries. I’ll describe how to implement queries that retrieve data scattered across multiple services.

Summary

  • Event sourcing persists an aggregate as a sequence of events. Each event represents either the creation of the aggregate or a state change. An application recreates the state of an aggregate by replaying events. Event sourcing preserves the history of a domain object, provides an accurate audit log, and reliably publishes domain events.
  • Snapshots improve performance by reducing the number of events that must be replayed.
  • Events are stored in an event store, a hybrid of a database and a message broker. When a service saves an event in an event store, it delivers the event to subscribers.
  • Eventuate Local is an open source event store based on MySQL and Apache Kafka. Developers use the Eventuate client framework to write aggregates and event handlers.
  • One challenge with using event sourcing is handling the evolution of events. An application potentially must handle multiple event versions when replaying events. A good solution is to use upcasting, which upgrades events to the latest version when they’re loaded from the event store.
  • Deleting data in an event sourcing application is tricky. An application must use techniques such as encryption and pseudonymization in order to comply with regulations like the European Union’s GDPR that require an application to erase an individual’s data.
  • Event sourcing is a simple way to implement choreography-based sagas. Services have event handlers that listen to the events published by event sourcing-based aggregates.
  • Event sourcing is a good way to implement saga orchestrators. As a result, you can write applications that exclusively use an event store.

Chapter 7. Implementing queries in a microservice architecture

This chapter covers

  • The challenges of querying data in a microservice architecture
  • When and how to implement queries using the API composition pattern
  • When and how to implement queries using the Command query responsibility segregation (CQRS) pattern

Mary and her team were just starting to get comfortable with the idea of using sagas to maintain data consistency. Then they discovered that transaction management wasn’t the only distributed data-related challenge they had to worry about when migrating the FTGO application to microservices. They also had to figure out how to implement queries.

In order to support the UI, the FTGO application implements a variety of query operations. Implementing these queries in the existing monolithic application is relatively straightforward, because it has a single database. For the most part, all the FTGO developers needed to do was write SQL SELECT statements and define the necessary indexes. As Mary discovered, writing queries in a microservice architecture is challenging. Queries often need to retrieve data that’s scattered among the databases owned by multiple services. You can’t, however, use a traditional distributed query mechanism, because even if it were technically possible, it would violate encapsulation.

Consider, for example, the query operations for the FTGO application described in chapter 2. Some queries retrieve data that’s owned by just one service. The findConsumerProfile() query, for example, returns data from Consumer Service. But other FTGO query operations, such as findOrder() and findOrderHistory(), return data owned by multiple services. Implementing these query operations is not as straightforward.

There are two different patterns for implementing query operations in a microservice architecture:

  • The API composition pattern - This is the simplest approach and should be used whenever possible. It works by making clients of the services that own the data responsible for invoking the services and combining the results.
  • The Command query responsibility segregation (CQRS) pattern - This is more powerful than the API composition pattern, but it’s also more complex. It maintains one or more view databases whose sole purpose is to support queries.

After discussing these two patterns, I will talk about how to design CQRS views, followed by the implementation of an example view. Let’s start by taking a look at the API composition pattern.

7.1. Querying using the API composition pattern

The FTGO application implements numerous query operations. Some queries, as mentioned earlier, retrieve data from a single service. Implementing these queries is usually straightforward—although later in this chapter, when I cover the CQRS pattern, you’ll see examples of single service queries that are challenging to implement.

There are also queries that retrieve data from multiple services. In this section, I describe the findOrder() query operation, which is an example of a query that retrieves data from multiple services. I explain the challenges that often crop up when implementing this type of query in a microservice architecture. I then describe the API composition pattern and show how you can use it to implement queries such as findOrder().

7.1.1. The findOrder() query operation

The findOrder() operation retrieves an order by its primary key. It takes an orderId as a parameter and returns an OrderDetails object, which contains information about the order. As shown in figure 7.1, this operation is called by a frontend module, such as a mobile device or a web application, that implements the Order Status view.

Figure 7.1. The findOrder() operation is invoked by an FTGO frontend module and returns the details of an Order.

The information displayed by the Order Status view includes basic information about the order, including its status, payment status, status of the order from the restaurant’s perspective, and delivery status, including its location and estimated delivery time if in transit.

Because its data resides in a single database, the monolithic FTGO application can easily retrieve the order details by executing a single SELECT statement that joins the various tables. In contrast, in the microservices-based version of the FTGO application, the data is scattered around the following services:

  • Order Service - Basic order information, including the details and status
  • Kitchen Service - Status of the order from the restaurant’s perspective and the estimated time it will be ready for pickup
  • Delivery Service - The order’s delivery status, estimated delivery information, and its current location
  • Accounting Service - The order’s payment status

Any client that needs the order details must ask all of these services.

7.1.2. Overview of the API composition pattern

One way to implement query operations, such as findOrder(), that retrieve data owned by multiple services is to use the API composition pattern. This pattern implements a query operation by invoking the services that own the data and combining the results. Figure 7.2 shows the structure of this pattern. It has two types of participants:

  • An API composer - This implements the query operation by querying the provider services.
  • A provider service - This is a service that owns some of the data that the query returns.
Figure 7.2. The API composition pattern consists of an API composer and two or more provider services. The API composer implements a query by querying the providers and combining the results.

Figure 7.2 shows three provider services. The API composer implements the query by retrieving data from the provider services and combining the results. An API composer might be a client, such as a web application, that needs the data to render a web page. Alternatively, it might be a service, such as an API gateway and its Backends for frontends variant described in chapter 8, which exposes the query operation as an API endpoint.

Pattern: API composition

Implement a query that retrieves data from several services by querying each service via its API and combining the results. See http://microservices.io/patterns/data/api-composition.html.

Whether you can use this pattern to implement a particular query operation depends on several factors, including how the data is partitioned, the capabilities of the APIs exposed by the services that own the data, and the capabilities of the databases used by the services. For instance, even if the Provider services have APIs for retrieving the required data, the aggregator might need to perform an inefficient, in-memory join of large datasets. Later on, you’ll see examples of query operations that can’t be implemented using this pattern. Fortunately, though, there are many scenarios where this pattern is applicable. To see it in action, we’ll look at an example.

7.1.3. Implementing the findOrder() query operation using the API composition pattern

The findOrder() query operation corresponds to a simple primary key-based equijoin query. It’s reasonable to expect that each of the Provider services has an API endpoint for retrieving the required data by orderId. Consequently, the findOrder() query operation is an excellent candidate to be implemented by the API composition pattern. The API composer invokes the four services and combines the results together. Figure 7.3 shows the design of the Find Order Composer.

Figure 7.3. Implementing findOrder() using the API composition pattern

In this example, the API composer is a service that exposes the query as a REST endpoint. The Provider services also implement REST APIs. But the concept is the same if the services used some other interprocess communication protocol, such as gRPC, instead of HTTP. The Find Order Composer implements a REST endpoint GET /order/{orderId}. It invokes the four services and joins the responses using the orderId. Each Provider service implements a REST endpoint that returns a response corresponding to a single aggregate. The OrderService retrieves its version of an Order by primary key and the other services use the orderId as a foreign key to retrieve their aggregates.
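
The join performed by the composer can be sketched with plain maps. This is a deliberately simplified, hypothetical sketch: each provider service is modeled as a lookup keyed by orderId, and the composer merges the four partial results into one order-details response.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical composer: each map stands in for a provider service's
// REST endpoint, keyed by orderId.
class FindOrderComposerSketch {
    final Map<String, String> orderService = new HashMap<>();
    final Map<String, String> kitchenService = new HashMap<>();
    final Map<String, String> deliveryService = new HashMap<>();
    final Map<String, String> accountingService = new HashMap<>();

    // Models GET /orders/{orderId}: query each provider with the same
    // orderId and combine the responses into a single result.
    Map<String, String> findOrder(String orderId) {
        Map<String, String> orderDetails = new HashMap<>();
        orderDetails.put("order", orderService.get(orderId));
        orderDetails.put("ticket", kitchenService.get(orderId));
        orderDetails.put("delivery", deliveryService.get(orderId));
        orderDetails.put("payment", accountingService.get(orderId));
        return orderDetails;
    }
}
```

The essential point is that the composer, not a database, performs the equijoin: every provider is queried with the same key and the results are merged in application code.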

As you can see, the API composition pattern is quite simple. Let’s look at a couple of design issues you must address when applying this pattern.

7.1.4. API composition design issues

When using this pattern, you have to address a couple of design issues:

  • Deciding which component in your architecture is the query operation’s API composer
  • How to write efficient aggregation logic

Let’s look at each issue.

Who plays the role of the API composer?

One decision that you must make is who plays the role of the query operation’s API composer. You have three options. The first option, shown in figure 7.4, is for a client of the services to be the API composer.

Figure 7.4. Implementing API composition in the client. The client queries the provider services to retrieve the data.

A frontend client such as a web application, that implements the Order Status view and is running on the same LAN, could efficiently retrieve the order details using this pattern. But as you’ll learn in chapter 8, this option is probably not practical for clients that are outside of the firewall and access services via a slower network.

The second option, shown in figure 7.5, is for an API gateway, which implements the application’s external API, to play the role of an API composer for a query operation.

Figure 7.5. Implementing API composition in the API gateway. The API gateway queries the provider services to retrieve the data, combines the results, and returns the response to the client.

This option makes sense if the query operation is part of the application’s external API. Instead of routing a request to another service, the API gateway implements the API composition logic. This approach enables a client, such as a mobile device, that’s running outside of the firewall to efficiently retrieve data from numerous services with a single API call. I discuss the API gateway in chapter 8.

The third option, shown in figure 7.6, is to implement an API composer as a standalone service.

Figure 7.6. Implementing a query operation used by multiple clients and services as a standalone service.

You should use this option for a query operation that’s used internally by multiple services. This option can also be used for externally accessible query operations whose aggregation logic is too complex to be part of an API gateway.

API composers should use a reactive programming model

When developing a distributed system, minimizing latency is an ever-present concern. Whenever possible, an API composer should call provider services in parallel in order to minimize the response time for a query operation. The Find Order Aggregator should, for example, invoke the four services concurrently because there are no dependencies between the calls. Sometimes, though, an API composer needs the result of one Provider service in order to invoke another service. In this case, it will need to invoke some—but hopefully not all—of the provider services sequentially.

The logic to efficiently execute a mixture of sequential and parallel service invocations can be complex. In order for an API composer to be maintainable as well as performant and scalable, it should use a reactive design based on Java CompletableFutures, RxJava observables, or some other equivalent abstraction. I discuss this topic further in chapter 8 when I cover the API gateway pattern.
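
The parallel fan-out half of such a design can be sketched with Java CompletableFutures. The provider-service calls below are hypothetical stand-ins for remote requests, not the book’s actual FTGO code:

```java
import java.util.concurrent.CompletableFuture;

public class FindOrderComposer {

    // Hypothetical provider calls; a real composer would issue REST or gRPC requests.
    static CompletableFuture<String> orderService(String id)      { return CompletableFuture.supplyAsync(() -> "order:" + id); }
    static CompletableFuture<String> kitchenService(String id)    { return CompletableFuture.supplyAsync(() -> "ticket:" + id); }
    static CompletableFuture<String> deliveryService(String id)   { return CompletableFuture.supplyAsync(() -> "delivery:" + id); }
    static CompletableFuture<String> accountingService(String id) { return CompletableFuture.supplyAsync(() -> "bill:" + id); }

    // Invoke all four providers concurrently, then combine the results.
    public static String findOrder(String orderId) {
        CompletableFuture<String> order    = orderService(orderId);
        CompletableFuture<String> ticket   = kitchenService(orderId);
        CompletableFuture<String> delivery = deliveryService(orderId);
        CompletableFuture<String> bill     = accountingService(orderId);
        // All four requests are already in flight before we wait on any of them.
        return CompletableFuture.allOf(order, ticket, delivery, bill)
                .thenApply(v -> String.join(",",
                        order.join(), ticket.join(), delivery.join(), bill.join()))
                .join();
    }
}
```

Because the calls run concurrently, the response time is roughly that of the slowest provider rather than the sum of all four.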

7.1.5. The benefits and drawbacks of the API composition pattern

This pattern is a simple and intuitive way to implement query operations in a microservice architecture. But it has some drawbacks:

  • Increased overhead
  • Risk of reduced availability
  • Lack of transactional data consistency

Let’s take a look at them.

Increased overhead

One drawback of this pattern is the overhead of invoking multiple services and querying multiple databases. In a monolithic application, a client can retrieve data with a single request, which will often execute a single database query. In comparison, using the API composition pattern involves multiple requests and database queries. As a result, more computing and network resources are required, increasing the cost of running the application.

Risk of reduced availability

Another drawback of this pattern is reduced availability. As described in chapter 3, the availability of an operation declines with the number of services that are involved. Because the implementation of a query operation involves at least three services—the API composer and at least two provider services—its availability will be significantly less than that of a single service. For example, if the availability of an individual service is 99.5%, then the availability of the findOrder() endpoint, which invokes four provider services, is 99.5%^(4+1) = 97.5%!
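
The arithmetic behind that figure is simply that the availabilities of the required components multiply. A quick sanity check:

```java
public class Availability {
    // Overall availability of an operation that requires all of the listed
    // components (the API composer plus each provider service) to be available.
    public static double composed(double perServiceAvailability, int serviceCount) {
        return Math.pow(perServiceAvailability, serviceCount);
    }
}
```

With five required components at 99.5% each, the composed operation is available only about 97.5% of the time.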

There are a couple of strategies you can use to improve availability. The first strategy is for the API composer to return previously cached data when a provider service is unavailable. An API composer sometimes caches the data returned by a provider service in order to improve performance. It can also use this cache to improve availability. If a provider is unavailable, the API composer can return data from the cache, though it may be potentially stale.

Another strategy for improving availability is for the API composer to return incomplete data. For example, imagine that Kitchen Service is temporarily unavailable. The API Composer for the findOrder() query operation could omit that service’s data from the response, because the UI can still display useful information. You’ll see more details on API design, caching, and reliability in chapter 8.
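
Both strategies amount to attaching a fallback to the provider call. A minimal sketch, assuming a hypothetical local cache and a stubbed-out Kitchen Service call that always fails:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;

public class ResilientComposer {

    // Hypothetical cache of data previously returned by Kitchen Service.
    static final Map<String, String> cache = Map.of("42", "ticket:42(cached)");

    // Stub standing in for a Kitchen Service call that is currently failing.
    static CompletableFuture<String> kitchenService(String orderId) {
        return CompletableFuture.failedFuture(new RuntimeException("service unavailable"));
    }

    // Strategy 1: fall back to possibly stale cached data.
    // Strategy 2: if there's no cached value either, omit the Kitchen Service
    // portion of the response (represented here by an empty string).
    public static String ticketFor(String orderId) {
        return kitchenService(orderId)
                .exceptionally(ex -> cache.getOrDefault(orderId, ""))
                .join();
    }
}
```

The caller still gets a usable, if incomplete, response instead of a failure.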

Lack of transactional data consistency

Another drawback of the API composition pattern is the lack of data consistency. A monolithic application typically executes a query operation using a single database transaction. ACID transactions—subject to the fine print about isolation levels—ensure that an application has a consistent view of the data, even if it executes multiple database queries. In contrast, the API composition pattern executes multiple database queries against multiple databases. There’s a risk, therefore, that a query operation will return inconsistent data.

For example, an Order retrieved from Order Service might be in the CANCELLED state, whereas the corresponding Ticket retrieved from Kitchen Service might not yet have been cancelled. The API composer must resolve this discrepancy, which increases the code complexity. To make matters worse, an API composer might not always be able to detect inconsistent data, and will return it to the client.

Despite these drawbacks, the API composition pattern is extremely useful. You can use it to implement many query operations. But there are some query operations that can’t be efficiently implemented using this pattern. A query operation might, for example, require the API composer to perform an in-memory join of large datasets.

It’s usually better to implement these types of query operations using the CQRS pattern. Let’s take a look at how this pattern works.

7.2. Using the CQRS pattern

Many enterprise applications use an RDBMS as the transactional system of record and a text search database, such as Elasticsearch or Solr, for text search queries. Some applications keep the databases synchronized by writing to both simultaneously. Others periodically copy data from the RDBMS to the text search engine. Applications with this architecture leverage the strengths of multiple databases: the transactional properties of the RDBMS and the querying capabilities of the text database.

Pattern: Command query responsibility segregation

Implement a query that needs data from several services by using events to maintain a read-only view that replicates data from the services. See http://microservices.io/patterns/data/cqrs.html.

CQRS is a generalization of this kind of architecture. It maintains one or more view databases—not just text search databases—that implement one or more of the application’s queries. To understand why this is useful, we’ll look at some queries that can’t be efficiently implemented using the API composition pattern. I’ll explain how CQRS works and then talk about the benefits and drawbacks of CQRS. Let’s take a look at when you need to use CQRS.

7.2.1. Motivations for using CQRS

The API composition pattern is a good way to implement many queries that must retrieve data from multiple services. Unfortunately, it’s only a partial solution to the problem of querying in a microservice architecture. That’s because there are multiple service queries the API composition pattern can’t implement efficiently.

What’s more, there are also single service queries that are challenging to implement. Perhaps the service’s database doesn’t efficiently support the query. Alternatively, it sometimes makes sense for a service to implement a query that retrieves data owned by a different service. Let’s take a look at these problems, starting with a multi-service query that can’t be efficiently implemented using API composition.

Implementing the findOrderHistory() query operation

The findOrderHistory() operation retrieves a consumer’s order history. It has several parameters:

  • consumerIdIdentifies the consumer
  • paginationPage of results to return
  • filterFilter criteria, including the max age of the orders to return, an optional order status, and optional keywords that match the restaurant name and menu items

This query operation returns an OrderHistory object that contains a summary of the matching orders sorted by increasing age. It’s called by the module that implements the Order History view. This view displays a summary of each order, which includes the order number, order status, order total, and estimated delivery time.

On the surface, this operation is similar to the findOrder() query operation. The only difference is that it returns multiple orders instead of just one. It may appear that the API composer only has to execute the same query against each Provider service and combine the results. Unfortunately, it’s not that simple.

That’s because not all services store the attributes that are used for filtering or sorting. For example, one of the findOrderHistory() operation’s filter criteria is a keyword that matches against a menu item. Only two of the services, Order Service and Kitchen Service, store an Order’s menu items. Neither Delivery Service nor Accounting Service stores the menu items, and so they can’t filter their data using this keyword. Similarly, neither Kitchen Service nor Delivery Service can sort by the orderCreationDate attribute.

There are two ways an API composer could solve this problem. One solution is for the API composer to do an in-memory join, as shown in figure 7.7. It retrieves all orders for the consumer from Delivery Service and Accounting Service and performs a join with the orders retrieved from Order Service and Kitchen Service.

Figure 7.7. API composition can’t efficiently retrieve a consumer’s orders, because some providers, such as Delivery Service, don’t store the attributes used for filtering.

The drawback of this approach is that it potentially requires the API composer to retrieve and join large datasets, which is inefficient.

The other solution is for the API composer to retrieve matching orders from Order Service and Kitchen Service and then request orders from the other services by ID. But this is only practical if those services have a bulk fetch API. Requesting orders individually will likely be inefficient because of excessive network traffic.

Queries such as findOrderHistory() require the API composer to duplicate the functionality of an RDBMS’s query execution engine. On one hand, this potentially moves work from the less scalable database to the more scalable application. On the other hand, it’s less efficient. Also, developers should be writing business functionality, not a query execution engine.

Next I show you how to apply the CQRS pattern and use a separate datastore, which is designed to efficiently implement the findOrderHistory() query operation. But first, let’s look at an example of a query operation that’s challenging to implement, despite being local to a single service.

A challenging single service query: findAvailableRestaurants()

As you’ve just seen, implementing queries that retrieve data from multiple services can be challenging. But even queries that are local to a single service can be difficult to implement. There are a couple of reasons why this might be the case. One is because, as discussed shortly, sometimes it’s not appropriate for the service that owns the data to implement the query. The other reason is that sometimes a service’s database (or data model) doesn’t efficiently support the query.

Consider, for example, the findAvailableRestaurants() query operation. This query finds the restaurants that are available to deliver to a given address at a given time. The heart of this query is a geospatial (location-based) search for restaurants that are within a certain distance of the delivery address. It’s a critical part of the order process and is invoked by the UI module that displays the available restaurants.

The key challenge when implementing this query operation is performing an efficient geospatial query. How you implement the findAvailableRestaurants() query depends on the capabilities of the database that stores the restaurants. For example, it’s straightforward to implement the findAvailableRestaurants() query using either MongoDB or the Postgres and MySQL geospatial extensions. These databases support geospatial datatypes, indexes, and queries. When using one of these databases, Restaurant Service persists a Restaurant as a database record that has a location attribute. It finds the available restaurants using a geospatial query that’s optimized by a geospatial index on the location attribute.
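
The essence of the query, finding the restaurants within a given distance of an address, can be illustrated with an in-memory great-circle (haversine) filter. A real Restaurant Service would delegate this to a geospatial index rather than scanning every row, and the data model below is invented for illustration:

```java
import java.util.List;
import java.util.stream.Collectors;

public class GeoSearch {
    // Illustrative stand-in for a Restaurant row with a location attribute.
    record Restaurant(String name, double lat, double lon) {}

    // Great-circle distance in kilometers (haversine formula).
    static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1), dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 6371 * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    // Return the names of restaurants within radiusKm of the delivery address.
    public static List<String> findAvailableRestaurants(
            List<Restaurant> all, double lat, double lon, double radiusKm) {
        return all.stream()
                .filter(r -> distanceKm(r.lat(), r.lon(), lat, lon) <= radiusKm)
                .map(Restaurant::name)
                .collect(Collectors.toList());
    }
}
```

A geospatial index lets the database answer this without the full scan shown here, which is why the choice of datastore matters so much for this query.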

If the FTGO application stores restaurants in some other kind of database, implementing the findAvailableRestaurant() query is more challenging. It must maintain a replica of the restaurant data in a form that’s designed to support the geospatial query. The application could, for example, use the Geospatial Indexing Library for DynamoDB (https://github.com/awslabs/dynamodb-geo) that uses a table as a geospatial index. Alternatively, the application could store a replica of the restaurant data in an entirely different type of database, a situation very similar to using a text search database for text queries.

The challenge with using replicas is keeping them up-to-date whenever the original data changes. As you’ll learn below, CQRS solves the problem of synchronizing replicas.

The need to separate concerns

Another reason why single service queries are challenging to implement is that sometimes the service that owns the data shouldn’t be the one that implements the query. The findAvailableRestaurants() query operation retrieves data that is owned by Restaurant Service. This service enables restaurant owners to manage their restaurant’s profile and menu items. It stores various attributes of a restaurant, including its name, address, cuisines, menu, and opening hours. Given that this service owns the data, it makes sense, at least on the surface, for it to implement this query operation. But data ownership isn’t the only factor to consider.

You must also take into account the need to separate concerns and avoid overloading services with too many responsibilities. For example, the primary responsibility of the team that develops Restaurant Service is enabling restaurant managers to maintain their restaurants. That’s quite different from implementing a high-volume, critical query. What’s more, if they were responsible for the findAvailableRestaurants() query operation, the team would constantly live in fear of deploying a change that prevented consumers from placing orders.

It makes sense for Restaurant Service to merely provide the restaurant data to another service that implements the findAvailableRestaurants() query operation and is most likely owned by the Order Service team. As with the findOrderHistory() query operation and the geospatial index described earlier, there’s a requirement to maintain an eventually consistent replica of some data in order to implement a query. Let’s look at how to accomplish that using CQRS.

7.2.2. Overview of CQRS

The examples described in section 7.2.1 highlighted three problems that are commonly encountered when implementing queries in a microservice architecture:

  • Using the API composition pattern to retrieve data scattered across multiple services results in expensive, inefficient in-memory joins.
  • The service that owns the data stores the data in a form or in a database that doesn’t efficiently support the required query.
  • The need to separate concerns means that the service that owns the data isn’t the service that should implement the query operation.

The solution to all three of these problems is to use the CQRS pattern.

CQRS separates commands from queries

Command Query Responsibility Segregation, as the name suggests, is all about segregation, or the separation of concerns. As figure 7.8 shows, it splits a persistent data model and the modules that use it into two parts: the command side and the query side. The command side modules and data model implement create, update, and delete operations (abbreviated CUD—for example, HTTP POSTs, PUTs, and DELETEs). The query-side modules and data model implement queries (such as HTTP GETs). The query side keeps its data model synchronized with the command-side data model by subscribing to the events published by the command side.
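
This split can be made concrete with a small in-memory simulation: the command side mutates its own store and publishes a domain event, and a query-side handler consumes that event to update a denormalized view. The names and event shape here are illustrative only, not a framework API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class CqrsSketch {
    record OrderCreated(String orderId, String state) {}

    // Command side: owns the writes and publishes a domain event on each change.
    static class CommandSide {
        final List<Consumer<OrderCreated>> subscribers = new ArrayList<>();
        void createOrder(String orderId) {
            // ... persist to the command-side database here ...
            OrderCreated event = new OrderCreated(orderId, "APPROVED");
            subscribers.forEach(s -> s.accept(event)); // publish the event
        }
    }

    // Query side: subscribes to events and maintains a read-optimized view.
    static class QuerySide {
        final Map<String, String> view = new HashMap<>();
        void handle(OrderCreated e) { view.put(e.orderId(), e.state()); }
        String findOrderState(String orderId) { return view.get(orderId); }
    }

    // Wire the two sides together and run a single command.
    public static String demo() {
        CommandSide cmd = new CommandSide();
        QuerySide qry = new QuerySide();
        cmd.subscribers.add(qry::handle);
        cmd.createOrder("42");
        return qry.findOrderState("42");
    }
}
```

In a real system, the publish step goes through a message broker (via a framework such as Eventuate Tram), so the query side catches up asynchronously rather than inline as it does here.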

Figure 7.8. On the left is the non-CQRS version of a service, and on the right is the CQRS version. CQRS restructures a service into command-side and query-side modules, which have separate databases.

Both the non-CQRS and CQRS versions of the service have an API consisting of various CRUD operations. In a non-CQRS-based service, those operations are typically implemented by a domain model that’s mapped to a database. For performance, a few queries might bypass the domain model and access the database directly. A single persistent data model supports both commands and queries.

In a CQRS-based service, the command-side domain model handles CRUD operations and is mapped to its own database. It may also handle simple queries, such as non-join, primary key-based queries. The command side publishes domain events whenever its data changes. These events might be published using a framework such as Eventuate Tram or using event sourcing.

A separate query model handles the nontrivial queries. It’s much simpler than the command side because it’s not responsible for implementing the business rules. The query side uses whatever kind of database makes sense for the queries that it must support. The query side has event handlers that subscribe to domain events and update the database or databases. There may even be multiple query models, one for each type of query.

CQRS and query-only services

Not only can CQRS be applied within a service, but you can also use this pattern to define query services. A query service has an API consisting of only query operations—no command operations. It implements the query operations by querying a database that it keeps up-to-date by subscribing to events published by one or more other services. A query-side service is a good way to implement a view that’s built by subscribing to events published by multiple services. This kind of view doesn’t belong to any particular service, so it makes sense to implement it as a standalone service. A good example of such a service is Order History Service, which is a query service that implements the findOrderHistory() query operation. As figure 7.9 shows, this service subscribes to events published by several services, including Order Service, Delivery Service, and so on.
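
A sketch of how such a query service’s event handlers might fold events from different services into a single view row. The event types, handler names, and view fields are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class OrderHistorySketch {
    // One denormalized row per order in the Order History view.
    record OrderSummary(String status, String deliveryStatus) {}

    static final Map<String, OrderSummary> view = new HashMap<>();

    // Handler for a hypothetical Order Service event.
    public static void onOrderCreated(String orderId) {
        view.put(orderId, new OrderSummary("CREATED", "PENDING"));
    }

    // Handler for a hypothetical Delivery Service event: updates the same row.
    public static void onDeliveryPickedUp(String orderId) {
        OrderSummary old = view.get(orderId);
        view.put(orderId, new OrderSummary(old.status(), "PICKED_UP"));
    }

    // Queries read the pre-joined view; no cross-service calls are needed.
    public static OrderSummary findOrder(String orderId) {
        return view.get(orderId);
    }
}
```

Because each row is pre-joined as events arrive, a findOrderHistory()-style read touches only the view database.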

Figure 7.9. The design of Order History Service, which is a query-side service. It implements the findOrderHistory() query operation by querying a database that it maintains by subscribing to events published by multiple other services.

Order History Service has event handlers that subscribe to events published by several services and update the Order History View Database. I describe the implementation of this service in more detail in section 7.4.

A query service is also a good way to implement a view that replicates data owned by a single service yet, because of the need to separate concerns, isn’t part of that service. For example, the FTGO developers can define an Available Restaurants Service, which implements the findAvailableRestaurants() query operation described earlier. It subscribes to events published by Restaurant Service and updates a database designed for efficient geospatial queries.

In many ways, CQRS is an event-based generalization of the popular approach of using RDBMS as the system of record and a text search engine, such as Elasticsearch, to handle text queries. What’s different is that CQRS uses a broader range of database types—not just a text search engine. Also, CQRS query-side views are updated in near real time by subscribing to events.

Let’s now look at the benefits and drawbacks of CQRS.

7.2.3. The benefits of CQRS

CQRS has both benefits and drawbacks. The benefits are as follows:

  • Enables the efficient implementation of queries in a microservice architecture
  • Enables the efficient implementation of diverse queries
  • Makes querying possible in an event sourcing-based application
  • Improves separation of concerns

Enables the efficient implementation of queries in a microservice architecture

One benefit of the CQRS pattern is that it efficiently implements queries that retrieve data owned by multiple services. As described earlier, using the API composition pattern to implement queries sometimes results in expensive, inefficient in-memory joins of large datasets. For those queries, it’s more efficient to use an easily queried CQRS view that pre-joins the data from two or more services.

Enables the efficient implementation of diverse queries

Another benefit of CQRS is that it enables an application or service to efficiently implement a diverse set of queries. Attempting to support all queries using a single persistent data model is often challenging and in some cases impossible. Some NoSQL databases have very limited querying capabilities. Even when a database has extensions to support a particular kind of query, using a specialized database is often more efficient. The CQRS pattern avoids the limitations of a single datastore by defining one or more views, each of which efficiently implements specific queries.

Enables querying in an event sourcing-based application

CQRS also overcomes a major limitation of event sourcing. An event store only supports primary key-based queries. The CQRS pattern addresses this limitation by defining one or more views of the aggregates, which are kept up-to-date by subscribing to the streams of events that are published by the event sourcing-based aggregates. As a result, an event sourcing-based application invariably uses CQRS.

Improves separation of concerns

Another benefit of CQRS is that it separates concerns. A domain model and its corresponding persistent data model don’t handle both commands and queries. The CQRS pattern defines separate code modules and database schemas for the command and query sides of a service. By separating concerns, the command side and query side are likely to be simpler and easier to maintain.

Moreover, CQRS enables the service that implements a query to be different than the service that owns the data. For example, earlier I described how even though Restaurant Service owns the data that’s queried by the findAvailableRestaurants query operation, it makes sense for another service to implement such a critical, high-volume query. A CQRS query service maintains a view by subscribing to the events published by the service or services that own the data.

7.2.4. The drawbacks of CQRS

Even though CQRS has several benefits, it also has significant drawbacks:

  • More complex architecture
  • Dealing with the replication lag

Let’s look at these drawbacks, starting with the increased complexity.

More complex architecture

One drawback of CQRS is that it adds complexity. Developers must write the query-side services that update and query the views. There is also the extra operational complexity of managing and operating the extra datastores. What’s more, an application might use different types of databases, which adds further complexity for both developers and operations.

Dealing with replication lag

Another drawback of CQRS is dealing with the “lag” between the command-side and the query-side views. As you might expect, there’s a delay between when the command side publishes an event and when that event is processed by the query side and the view updated. A client application that updates an aggregate and then immediately queries a view may see the previous version of the aggregate. It must often be written in a way that avoids exposing these potential inconsistencies to the user.

One solution is for the command-side and query-side APIs to supply the client with version information that enables it to tell that the query side is out-of-date. A client can poll the query-side view until it’s up-to-date. Shortly I’ll discuss how the service APIs can enable a client to do this.
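
One way to sketch that protocol: the command returns the version it wrote, and the client checks (or polls) the query side until the view has caught up. The version bookkeeping below is an assumption about how such an API might look, not a prescribed design:

```java
public class VersionPolling {
    // Version last written by the command side (returned to the client).
    static long commandVersion = 0;
    // Version the query-side view has applied so far.
    static long viewVersion = 0;

    // Command side: perform the update and tell the client which version it wrote.
    public static long executeCommand() {
        return ++commandVersion;
    }

    // Query-side event handler catching up on one published event.
    public static void applyNextEvent() {
        viewVersion++;
    }

    // Client-side check: is the view at least as new as my last write?
    public static boolean viewIsFresh(long versionSeenByClient) {
        return viewVersion >= versionSeenByClient;
    }
}
```

A client that sees a stale view simply retries the query after a short delay until viewIsFresh() reports true.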

A UI application such as a native mobile application or single page JavaScript application can handle replication lag by updating its local model once the command is successful without issuing a query. It can, for example, update its model using data returned by the command. Hopefully, when a user action triggers a query, the view will be up-to-date. One drawback of this approach is that the UI code may need to duplicate server-side code in order to update its model.

As you can see, CQRS has both benefits and drawbacks. As mentioned earlier, you should use API composition whenever possible and use CQRS only when you must.

Now that you’ve seen the benefits and drawbacks of CQRS, let’s now look at how to design CQRS views.

7.3. Designing CQRS views

A CQRS view module has an API consisting of one or more query operations. It implements these query operations by querying a database that it maintains by subscribing to events published by one or more services. As figure 7.10 shows, a view module consists of a view database and three submodules.

Figure 7.10. The design of a CQRS view module. Event handlers update the view database, which is queried by the query API module.

The data access module implements the database access logic. The event handlers and query API modules use the data access module to update and query the database. The event handlers module subscribes to events and updates the database. The query API module implements the query API.

在开发视图模块时,您必须做出一些重要的设计决策:

You must make some important design decisions when developing a view module:

  • 您必须选择一个数据库并设计架构。
  • You must choose a database and design the schema.
  • 在设计数据访问模块时,您必须解决各种问题,包括确保更新是幂等的,以及 处理并发更新。
  • When designing the data access module, you must address various issues, including ensuring that updates are idempotent and handling concurrent updates.
  • 在现有应用程序中实现新视图或更改现有应用程序的架构时,必须实现 一种用于高效构建或重建视图的机制。
  • When implementing a new view in an existing application or changing the schema of an existing application, you must implement a mechanism to efficiently build or rebuild the view.
  • 您必须决定如何使视图的客户端能够处理前面描述的复制滞后。
  • You must decide how to enable a client of the view to cope with the replication lag, described earlier.

让我们看看这些问题中的每一个。

Let’s look at each of these issues.

7.3.1. 选择视图数据存储

7.3.1. Choosing a view datastore

一个关键的设计决策是数据库的选择和架构的设计。数据库和数据模型的主要用途是高效地实现视图模块的查询操作。这些查询的特征是选择数据库时的主要考虑因素。但数据库还必须高效地实现事件处理程序执行的更新操作。

A key design decision is the choice of database and the design of the schema. The primary purpose of the database and the data model is to efficiently implement the view module’s query operations. It’s the characteristics of those queries that are the primary consideration when selecting a database. But the database must also efficiently implement the update operations performed by the event handlers.

SQL 与 NoSQL 数据库

SQL versus NoSQL databases

不久前,只有一种类型的数据库可以统治所有这些数据库:基于 SQL 的 RDBMS。然而,随着 Web 的普及, 许多公司发现 RDBMS 无法满足其 Web 规模要求。这导致了 所谓的 NoSQL 数据库。NoSQL 数据库通常具有有限形式的事务和不太通用的查询功能。对于某些使用案例,这些数据库 与 SQL 数据库相比,具有一定的优势,包括更灵活的数据模型以及更好的性能和可扩展性。

Not that long ago, there was one type of database to rule them all: the SQL-based RDBMS. As the Web grew in popularity, though, various companies discovered that an RDBMS couldn’t satisfy their web scale requirements. That led to the creation of the so-called NoSQL databases. A NoSQL database typically has a limited form of transactions and less general querying capabilities. For certain use cases, these databases have certain advantages over SQL databases, including a more flexible data model and better performance and scalability.

NoSQL 数据库通常是 CQRS 视图的不错选择，它可以利用其优势并忽略其弱点。CQRS 视图受益于 NoSQL 数据库更丰富的数据模型和更好的性能。它不受 NoSQL 数据库限制的影响，因为它只使用简单事务并执行一组固定的查询。

A NoSQL database is often a good choice for a CQRS view, which can leverage its strengths and ignore its weaknesses. A CQRS view benefits from the richer data model and performance of a NoSQL database. It’s unaffected by the limitations of a NoSQL database, because it only uses simple transactions and executes a fixed set of queries.

话虽如此，有时使用 SQL 数据库实现 CQRS 视图也是有意义的。在现代硬件上运行的现代 RDBMS 具有出色的性能。一般来说，开发人员、数据库管理员和 IT 运营人员对 SQL 数据库比对 NoSQL 数据库熟悉得多。如前所述，SQL 数据库通常具有非关系功能的扩展，例如地理空间数据类型和查询。此外，CQRS 视图可能需要使用 SQL 数据库才能支持报告引擎。

Having said that, sometimes it makes sense to implement a CQRS view using a SQL database. A modern RDBMS running on modern hardware has excellent performance. Developers, database administrators, and IT operations are, in general, much more familiar with SQL databases than they are with NoSQL databases. As mentioned earlier, SQL databases often have extensions for non-relational features, such as geospatial datatypes and queries. Also, a CQRS view might need to use a SQL database in order to support a reporting engine.

如表 7.1 所示，有许多不同的选项可供选择。而使选择更加复杂的是，不同类型的数据库之间的差异开始变得模糊。例如，作为 RDBMS 的 MySQL 对 JSON 有很好的支持，而这正是 MongoDB（一种 JSON 风格的面向文档的数据库）的优势之一。

As you can see in table 7.1, there are lots of different options to choose from. And to make the choice even more complicated, the differences between the different types of database are starting to blur. For example, MySQL, which is an RDBMS, has excellent support for JSON, which is one of the strengths of MongoDB, a JSON-style document-oriented database.

表 7.1. 查询端视图存储

Table 7.1. Query-side view stores

| 如果您需要 | 使用 | 示例 |
| --- | --- | --- |
| 基于 PK 的 JSON 对象查找 | 文档存储（如 MongoDB 或 DynamoDB）或键值存储（如 Redis） | 通过维护包含每个客户的 MongoDB 文档来实施订单历史记录 |
| 基于查询的 JSON 对象查找 | 文档存储（如 MongoDB 或 DynamoDB） | 使用 MongoDB 或 DynamoDB 实施客户视图 |
| 文本查询 | 文本搜索引擎（如 Elasticsearch） | 通过维护每个订单的 Elasticsearch 文档来实施订单的文本搜索 |
| 图形查询 | 图形数据库（如 Neo4j） | 通过维护客户、订单和其他数据的图表来实施欺诈检测 |
| 传统 SQL 报告/BI | RDBMS | 标准业务报告和分析 |

| If you need | Use | Example |
| --- | --- | --- |
| PK-based lookup of JSON objects | A document store such as MongoDB or DynamoDB, or a key-value store such as Redis | Implement order history by maintaining a MongoDB document for each customer |
| Query-based lookup of JSON objects | A document store such as MongoDB or DynamoDB | Implement a customer view using MongoDB or DynamoDB |
| Text queries | A text search engine such as Elasticsearch | Implement text search for orders by maintaining a per-order Elasticsearch document |
| Graph queries | A graph database such as Neo4j | Implement fraud detection by maintaining a graph of customers, orders, and other data |
| Traditional SQL reporting/BI | An RDBMS | Standard business reports and analytics |

现在,我已经讨论了可用于实现 CQRS 视图的不同类型的数据库,让我们看看 如何有效地更新视图。

Now that I’ve discussed the different kinds of databases you can use to implement a CQRS view, let’s look at the problem of how to efficiently update a view.

支持更新操作

Supporting update operations

除了有效地实现查询外，视图数据模型还必须有效地实现由事件处理程序执行的更新操作。通常，事件处理程序会使用主键更新或删除视图数据库中的记录。例如，稍后我将介绍 findOrderHistory() 查询的 CQRS 视图的设计。它使用 orderId 作为主键，将每个 Order 存储为一条数据库记录。当此视图从 Order Service 收到事件时，它可以直接更新相应的记录。

Besides efficiently implementing queries, the view data model must also efficiently implement the update operations executed by the event handlers. Usually, an event handler will update or delete a record in the view database using its primary key. For example, soon I’ll describe the design of a CQRS view for the findOrderHistory() query. It stores each Order as a database record using the orderId as the primary key. When this view receives an event from Order Service, it can straightforwardly update the corresponding record.

但有时，它需要使用相当于外键的字段来更新或删除记录。例如，考虑 Delivery* 事件的事件处理程序。如果 Delivery 和 Order 之间存在一对一的对应关系，则 Delivery.id 可能与 Order.id 相同。如果是这样，则 Delivery* 事件处理程序可以轻松更新订单的数据库记录。

Sometimes, though, it will need to update or delete a record using the equivalent of a foreign key. Consider, for instance, the event handlers for Delivery* events. If there is a one-to-one correspondence between a Delivery and an Order, then Delivery.id might be the same as Order.id. If it is, then Delivery* event handlers can easily update the order’s database record.

但是，假设 Delivery 有自己的主键，或者 Order 和 Delivery 之间存在一对多关系。某些 Delivery* 事件（如 DeliveryCreated 事件）将包含 orderId。但其他事件（如 DeliveryPickedUp 事件）可能不会。在这种情况下，DeliveryPickedUp 的事件处理程序需要使用 deliveryId 作为等效的外键来更新订单的记录。

But suppose a Delivery has its own primary key or there is a one-to-many relationship between an Order and a Delivery. Some Delivery* events, such as the DeliveryCreated event, will contain the orderId. But other events, such as a DeliveryPickedUp event, might not. In this scenario, an event handler for DeliveryPickedUp will need to update the order’s record using the deliveryId as the equivalent of a foreign key.

某些类型的数据库能高效地支持基于外键的更新操作。例如，如果您使用的是 RDBMS 或 MongoDB，您可以在必要的列上创建索引。但是，在使用其他 NoSQL 数据库时，非主键更新并不简单。应用程序需要维护某种特定于数据库的、从外键到主键的映射，以确定要更新的记录。例如，使用 DynamoDB（它仅支持基于主键的更新和删除）的应用程序必须首先查询 DynamoDB 二级索引（稍后讨论），以确定要更新或删除的项目的主键。

Some types of database efficiently support foreign-key-based update operations. For example, if you’re using an RDBMS or MongoDB, you create an index on the necessary columns. However, non-primary key-based updates are not straightforward when using other NoSQL databases. The application will need to maintain some kind of database-specific mapping from a foreign key to a primary key in order to determine which record to update. For example, an application that uses DynamoDB, which only supports primary key-based updates and deletes, must first query a DynamoDB secondary index (discussed shortly) to determine the primary keys of the items to update or delete.
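
To make this concrete, the following Java sketch simulates the foreign-key-to-primary-key bookkeeping with in-memory maps standing in for the DynamoDB table and its secondary index. All class and method names are hypothetical; a real DAO would use the AWS SDK rather than HashMaps:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch (not real AWS SDK code): how a view DAO might map a foreign key
// (deliveryId) to a primary key (orderId) when the datastore only supports
// primary key-based updates.
class ForeignKeyMappingSketch {

    // Plays the role of a secondary index: deliveryId -> orderId
    private final Map<String, String> deliveryIdToOrderId = new HashMap<>();

    // Plays the role of the main table, keyed by orderId
    private final Map<String, Map<String, String>> ordersByOrderId = new HashMap<>();

    // A DeliveryCreated event carries both IDs, so the handler can record
    // the mapping for later events that lack the orderId.
    void handleDeliveryCreated(String deliveryId, String orderId) {
        deliveryIdToOrderId.put(deliveryId, orderId);
        ordersByOrderId.computeIfAbsent(orderId, k -> new HashMap<>())
                       .put("deliveryStatus", "CREATED");
    }

    // A DeliveryPickedUp event carries only the deliveryId, so the handler
    // first resolves the primary key via the "index".
    void handleDeliveryPickedUp(String deliveryId) {
        String orderId = deliveryIdToOrderId.get(deliveryId);
        if (orderId == null) {
            return; // no mapping yet; a real DAO might retry the event later
        }
        ordersByOrderId.get(orderId).put("deliveryStatus", "PICKED_UP");
    }

    String deliveryStatus(String orderId) {
        Map<String, String> order = ordersByOrderId.get(orderId);
        return order == null ? null : order.get("deliveryStatus");
    }
}
```

The DeliveryCreated handler records the deliveryId-to-orderId mapping precisely because later events such as DeliveryPickedUp carry only the deliveryId.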

7.3.2. 数据访问模块设计

7.3.2. Data access module design

事件处理程序和查询 API 模块不直接访问数据存储。相反，它们使用数据访问模块，该模块由数据访问对象 (DAO) 及其帮助程序类组成。DAO 有几项职责。它实现由事件处理程序调用的更新操作和由查询模块调用的查询操作。DAO 在更高级别代码使用的数据类型与数据库 API 使用的数据类型之间进行映射。它还必须处理并发更新并确保更新是幂等的。

The event handlers and the query API module don’t access the datastore directly. Instead they use the data access module, which consists of a data access object (DAO) and its helper classes. The DAO has several responsibilities. It implements the update operations invoked by the event handlers and the query operations invoked by the query module. The DAO maps between the data types used by the higher-level code and the database API. It also must handle concurrent updates and ensure that updates are idempotent.

让我们看看这些问题,从如何处理并发更新开始。

Let’s look at these issues, starting with how to handle concurrent updates.

处理并发

Handling concurrency

有时，DAO 必须处理对同一数据库记录的多个并发更新的可能性。如果视图订阅的是由单个聚合类型发布的事件，则不会有任何并发问题。这是因为特定聚合实例发布的事件是按顺序处理的。因此，与聚合实例对应的记录不会被并发更新。但是，如果视图订阅了多个聚合类型发布的事件，则多个事件处理程序可能会同时更新同一条记录。

Sometimes a DAO must handle the possibility of multiple concurrent updates to the same database record. If a view subscribes to events published by a single aggregate type, there won’t be any concurrency issues. That’s because events published by a particular aggregate instance are processed sequentially. As a result, a record corresponding to an aggregate instance won’t be updated concurrently. But if a view subscribes to events published by multiple aggregate types, then it’s possible that multiple event handlers update the same record simultaneously.

例如，Order* 事件的事件处理程序可能与同一订单的 Delivery* 事件的事件处理程序同时被调用。然后，两个事件处理程序同时调用 DAO 来更新该 Order 的数据库记录。DAO 必须以确保正确处理这种情况的方式编写。它不得允许一个更新覆盖另一个更新。如果 DAO 通过读取记录然后写入更新后的记录来实现更新，则必须使用悲观锁定或乐观锁定。在下一节中，您将看到一个 DAO 示例，该示例通过直接更新数据库记录（而无需先读取它们）来处理并发更新。

For example, an event handler for an Order* event might be invoked at the same time as an event handler for a Delivery* event for the same order. Both event handlers then simultaneously invoke the DAO to update the database record for that Order. A DAO must be written in a way that ensures that this situation is handled correctly. It must not allow one update to overwrite another. If a DAO implements updates by reading a record and then writing the updated record, it must use either pessimistic or optimistic locking. In the next section you’ll see an example of a DAO that handles concurrent updates by updating database records without reading them first.
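
A minimal sketch of the optimistic-locking alternative, assuming a read-modify-write DAO and using an in-memory map as the stand-in database (all names are illustrative, not the book’s code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: optimistic locking for a DAO that implements an update as
// read-then-write. A version number detects a conflicting concurrent write.
class OptimisticLockingSketch {

    static class Record {
        long version;
        String state;
        Record(long version, String state) {
            this.version = version;
            this.state = state;
        }
    }

    private final Map<String, Record> table = new HashMap<>();

    void insert(String key, String state) {
        table.put(key, new Record(0, state));
    }

    Record read(String key) {
        return table.get(key);
    }

    // Returns true if the update succeeded, false if another writer changed
    // the record after this writer read it (version mismatch).
    boolean updateWithVersionCheck(String key, long expectedVersion, String newState) {
        Record r = table.get(key);
        if (r == null || r.version != expectedVersion) {
            return false; // caller must re-read and retry
        }
        r.state = newState;
        r.version++;
        return true;
    }
}
```

A writer that loses the race gets `false` back and must re-read and retry, which is exactly the overhead that updating records without reading them first avoids.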

幂等事件处理程序

Idempotent event handlers

如第 3 章所述，事件处理程序可能会使用同一事件被多次调用。如果查询端事件处理程序是幂等的，这通常不是问题。如果处理重复事件会得到正确的结果，则该事件处理程序是幂等的。在最坏的情况下，视图数据存储将暂时过期。例如，维护 Order History 视图的事件处理程序可能会按图 7.11 所示的（诚然不太可能的）事件序列被调用：DeliveryPickedUp、DeliveryDelivered、DeliveryPickedUp 和 DeliveryDelivered。在第一次传送 DeliveryPickedUp 和 DeliveryDelivered 事件之后，消息代理（可能由于网络错误）开始从较早的时间点重新传送事件，因此重新传送了 DeliveryPickedUp 和 DeliveryDelivered。

As mentioned in chapter 3, an event handler may be invoked with the same event more than once. This is generally not a problem if a query-side event handler is idempotent. An event handler is idempotent if handling duplicate events results in the correct outcome. In the worst case, the view datastore will temporarily be out-of-date. For example, an event handler that maintains the Order History view might be invoked with the (admittedly improbable) sequence of events shown in figure 7.11: DeliveryPickedUp, DeliveryDelivered, DeliveryPickedUp, and DeliveryDelivered. After delivering the DeliveryPickedUp and DeliveryDelivered events the first time, the message broker, perhaps because of a network error, starts delivering the events from an earlier point in time, and so redelivers DeliveryPickedUp and DeliveryDelivered.

图 7.11. DeliveryPickedUp 和 DeliveryDelivered 事件被传递两次，这会导致视图中的订单状态暂时过期。

Figure 7.11. The DeliveryPickedUp and DeliveryDelivered events are delivered twice, which causes the order status in the view to temporarily be out-of-date.

在事件处理程序处理第二个 DeliveryPickedUp 事件之后，Order History 视图会暂时包含 Order 的过期状态，直到 DeliveryDelivered 被处理。如果这种行为是不可接受的，则事件处理程序应该像非幂等事件处理程序一样检测并丢弃重复事件。

After the event handler processes the second DeliveryPickedUp event, the Order History view temporarily contains the out-of-date state of the Order until the DeliveryDelivered is processed. If this behavior is undesirable, then the event handler should detect and discard duplicate events, like a non-idempotent event handler.

如果重复事件会导致不正确的结果，则事件处理程序不是幂等的。例如，递增银行账户余额的事件处理程序不是幂等的。如第 3 章所述，非幂等事件处理程序必须通过在视图数据存储中记录已处理事件的 ID 来检测并丢弃重复事件。

An event handler isn’t idempotent if duplicate events result in an incorrect outcome. For example, an event handler that increments the balance of a bank account isn’t idempotent. A non-idempotent event handler must, as explained in chapter 3, detect and discard duplicate events by recording the IDs of events that it has processed in the view datastore.

为了可靠，事件处理程序必须以原子方式记录事件 ID 并更新数据存储。具体如何实现取决于数据库的类型。如果视图数据存储是 SQL 数据库，则事件处理程序可以将已处理的事件插入到 PROCESSED_EVENTS 表中，作为更新视图的事务的一部分。但是，如果视图数据存储是事务模型有限的 NoSQL 数据库，则事件处理程序必须将事件保存在它所更新的数据存储“记录”（例如，MongoDB 文档或 DynamoDB 表项）中。

In order to be reliable, the event handler must record the event ID and update the datastore atomically. How to do this depends on the type of database. If the view database store is a SQL database, the event handler could insert processed events into a PROCESSED_EVENTS table as part of the transaction that updates the view. But if the view datastore is a NoSQL database that has a limited transaction model, the event handler must save the event in the datastore “record” (for example, a MongoDB document or DynamoDB table item) that it updates.

请务必注意，事件处理程序不需要记录每个事件的 ID。如果像 Eventuate 那样，事件的 ID 单调递增，则每条记录只需要存储从给定聚合实例接收到的 max(eventId)。此外，如果记录对应于单个聚合实例，则事件处理程序只需要记录 max(eventId)。只有表示来自多个聚合的事件联接的记录才必须包含从 [aggregate type, aggregate id] 到 max(eventId) 的映射。

It’s important to note that the event handler doesn’t need to record the ID of every event. If, as is the case with Eventuate, events have a monotonically increasing ID, then each record only needs to store the max(eventId) that’s received from a given aggregate instance. Furthermore, if the record corresponds to a single aggregate instance, then the event handler only needs to record max(eventId). Only records that represent joins of events from multiple aggregates must contain a map from [aggregate type, aggregate id] to max(eventId).

例如，您很快就会看到 Order History 视图的 DynamoDB 实现包含具有用于跟踪事件的属性的项目，如下所示：

For example, you’ll soon see that the DynamoDB implementation of the Order History view contains items that have attributes for tracking events that look like this:

{...
      "Order3949384394-039434903" : "0000015e0c6fc18f-0242ac1100e50002",
      "Delivery3949384394-039434903" : "0000015e0c6fc264-0242ac1100e50002",
   }

此视图是各种服务发布的事件的联接。每个事件跟踪属性的名称都是 «aggregateType»«aggregateId»，值是 eventId。稍后，我将更详细地介绍其工作原理。

This view is a join of events published by various services. The name of each of these event-tracking attributes is «aggregateType»«aggregateId», and the value is the eventId. Later on, I describe how this works in more detail.
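
The bookkeeping just described can be sketched as follows, assuming event IDs are monotonically increasing and fixed-length, so that lexicographic comparison matches numeric order (class and method names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: tracking max(eventId) per [aggregate type, aggregate id] to make
// an event handler idempotent. A duplicate (or out-of-order redelivery) has
// an ID <= the highest ID already processed and is discarded.
class DuplicateEventDetectionSketch {

    // "«aggregateType»«aggregateId»" -> highest eventId seen so far
    private final Map<String, String> maxEventIdByAggregate = new HashMap<>();

    // Returns true if the event is new and should be applied, false if it
    // is a duplicate.
    boolean shouldProcess(String aggregateType, String aggregateId, String eventId) {
        String key = aggregateType + aggregateId;
        String maxSeen = maxEventIdByAggregate.get(key);
        if (maxSeen != null && eventId.compareTo(maxSeen) <= 0) {
            return false; // already processed
        }
        maxEventIdByAggregate.put(key, eventId);
        return true;
    }
}
```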

使客户端应用程序能够使用最终一致性视图

Enabling a client application to use an eventually consistent view

正如我之前所说，使用 CQRS 的一个问题是，更新命令端然后立即执行查询的客户端可能看不到自己的更新。由于消息传递基础设施不可避免的延迟，视图是最终一致的。

As I said earlier, one issue with using CQRS is that a client that updates the command side and then immediately executes a query might not see its own update. The view is eventually consistent because of the unavoidable latency of the messaging infrastructure.

命令和查询模块 API 可以使客户端使用以下方法检测不一致。命令端 操作将包含已发布事件 ID 的令牌返回给客户端。然后,客户端将令牌传递给查询 操作,如果该事件尚未更新视图,则返回错误。视图模块可以实现此机制 使用重复事件检测机制。

The command and query module APIs can enable the client to detect an inconsistency using the following approach. A command-side operation returns a token containing the ID of the published event to the client. The client then passes the token to a query operation, which returns an error if the view hasn’t been updated by that event. A view module can implement this mechanism using the duplicate event-detection mechanism.
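
A minimal sketch of that approach, assuming the token the command side returns is simply the published event’s ID and that event IDs increase monotonically (names are illustrative):

```java
// Sketch: a query-side check that reuses the duplicate-detection
// bookkeeping to tell a client whether its own update is visible yet.
class ConsistencyTokenSketch {

    private String maxProcessedEventId = "";

    // Called by the event handler after applying an event to the view.
    void noteProcessed(String eventId) {
        if (eventId.compareTo(maxProcessedEventId) > 0) {
            maxProcessedEventId = eventId;
        }
    }

    // Called by the query API: the view is up-to-date for this client if
    // the event named in the token has already been applied. If not, the
    // query operation would return an error (or the client could poll).
    boolean isUpToDate(String token) {
        return token.compareTo(maxProcessedEventId) <= 0;
    }
}
```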

7.3.3. 添加和更新 CQRS 视图

7.3.3. Adding and updating CQRS views

CQRS 视图将在应用程序的整个生命周期内添加和更新。有时您需要添加新视图以支持 新查询。在其他时候,您可能需要重新创建视图,因为架构已更改,或者您需要修复 更新视图的代码。

CQRS views will be added and updated throughout the lifetime of an application. Sometimes you need to add a new view to support a new query. At other times you might need to re-create a view because the schema has changed or you need to fix a bug in code that updates the view.

从概念上讲,添加和更新视图非常简单。要创建新视图,您需要开发查询端模块,设置 datastore 并部署服务。查询端模块的事件处理程序处理所有事件,最终视图将是最新的。同样,更新现有的 View 在概念上也很简单:更改事件处理程序并从头开始重新构建 View。然而,问题是 这种方法在实践中不太可能奏效。让我们看看问题。

Adding and updating views is conceptually quite simple. To create a new view, you develop the query-side module, set up the datastore, and deploy the service. The query side module’s event handlers process all the events, and eventually the view will be up-to-date. Similarly, updating an existing view is also conceptually simple: you change the event handlers and rebuild the view from scratch. The problem, however, is that this approach is unlikely to work in practice. Let’s look at the issues.

使用存档事件构建 CQRS 视图

Building CQRS views using archived events

一个问题是消息代理不能无限期地存储消息。传统的消息代理（如 RabbitMQ）会在消费者处理完一条消息后将其删除。即使是更现代的代理（如 Apache Kafka，它会在可配置的保留期内保留消息），也并非旨在无限期地存储事件。因此，视图不能仅通过从消息代理读取所有需要的事件来构建。相反，应用程序还必须读取已存档在（例如）AWS S3 中的较旧事件。您可以使用可扩展的大数据技术（如 Apache Spark）来实现这一点。

One problem is that message brokers can’t store messages indefinitely. Traditional message brokers such as RabbitMQ delete a message once it’s been processed by a consumer. Even more modern brokers such as Apache Kafka, that retain messages for a configurable retention period, aren’t intended to store events indefinitely. As a result, a view can’t be built by only reading all the needed events from the message broker. Instead, an application must also read older events that have been archived in, for example, AWS S3. You can do this by using a scalable big data technology such as Apache Spark.

以增量方式构建 CQRS 视图

Building CQRS views incrementally

视图创建的另一个问题是,处理所有事件所需的时间和资源会随着时间的推移而不断增长。最终 视图创建将变得太慢且成本高昂。解决方案是使用两步增量算法。第一步 根据每个聚合实例的上一个快照和此后发生的事件定期计算每个聚合实例的快照 该快照已创建。第二步使用快照和任何后续事件创建视图。

Another problem with view creation is that the time and resources required to process all events keep growing over time. Eventually, view creation will become too slow and expensive. The solution is to use a two-step incremental algorithm. The first step periodically computes a snapshot of each aggregate instance based on its previous snapshot and events that have occurred since that snapshot was created. The second step creates a view using the snapshots and any subsequent events.
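
The two steps can be sketched as follows. The "state" here is just a concatenation of event names, purely to illustrate the fold; a real implementation would apply each event to an aggregate snapshot:

```java
import java.util.List;

// Sketch of the two-step incremental algorithm: step 1 periodically folds
// events that occurred since the last snapshot into a new snapshot; step 2
// builds the view from the latest snapshot plus any subsequent events.
// All names are hypothetical.
class IncrementalViewBuildSketch {

    static class Snapshot {
        final String state;
        final int lastEventIndex; // index of the last event folded in (-1 = none)

        Snapshot(String state, int lastEventIndex) {
            this.state = state;
            this.lastEventIndex = lastEventIndex;
        }
    }

    // Step 1: extend the previous snapshot with the events that occurred
    // since it was created, instead of re-reading the full event history.
    static Snapshot updateSnapshot(Snapshot previous, List<String> allEvents) {
        String state = previous.state;
        for (int i = previous.lastEventIndex + 1; i < allEvents.size(); i++) {
            state = state + "|" + allEvents.get(i);
        }
        return new Snapshot(state, allEvents.size() - 1);
    }

    // Step 2: the view is the snapshot plus any events newer than it.
    static String buildView(Snapshot snapshot, List<String> eventsAfterSnapshot) {
        String state = snapshot.state;
        for (String e : eventsAfterSnapshot) {
            state = state + "|" + e;
        }
        return state;
    }
}
```

Because each snapshot starts from the previous one, the cost of a rebuild stays proportional to the number of new events rather than the full history.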

7.4. 使用 AWS DynamoDB 实现 CQRS 视图

7.4. Implementing a CQRS view with AWS DynamoDB

现在我们已经了解了使用 CQRS 时必须解决的各种设计问题，让我们来看一个示例。本节介绍如何使用 DynamoDB 为 findOrderHistory() 操作实现 CQRS 视图。AWS DynamoDB 是一种可扩展的 NoSQL 数据库，在 Amazon 云上以服务的形式提供。DynamoDB 数据模型由表组成，表包含项目，项目（如 JSON 对象）是分层名称-值对的集合。AWS DynamoDB 是一个完全托管的数据库，您可以动态地扩展和缩减表的吞吐容量。

Now that we’ve looked at the various design issues you must address when using CQRS, let’s consider an example. This section describes how to implement a CQRS view for the findOrderHistory() operation using DynamoDB. AWS DynamoDB is a scalable, NoSQL database that’s available as a service on the Amazon cloud. The DynamoDB data model consists of tables that contain items that, like JSON objects, are collections of hierarchical name-value pairs. AWS DynamoDB is a fully managed database, and you can scale the throughput capacity of a table up and down dynamically.

findOrderHistory() 的 CQRS 视图使用来自多个服务的事件，因此它被实现为独立的 Order View Service。该服务有一个 API，用于实现两个操作：findOrderHistory() 和 findOrder()。尽管 findOrder() 可以使用 API 组合来实现，但此视图免费提供了此操作。图 7.12 显示了该服务的设计。Order History Service 的结构是一组模块，每个模块都实现特定的职责，以简化开发和测试。每个模块的职责如下：

The CQRS view for the findOrderHistory() consumes events from multiple services, so it’s implemented as a standalone Order View Service. The service has an API that implements two operations: findOrderHistory() and findOrder(). Even though findOrder() can be implemented using API composition, this view provides this operation for free. Figure 7.12 shows the design of the service. Order History Service is structured as a set of modules, each of which implements a particular responsibility in order to simplify development and testing. The responsibility of each module is as follows:

  • OrderHistoryEventHandlers — 订阅各种服务发布的事件，并调用 OrderHistoryDAO
  • OrderHistoryEventHandlers—Subscribes to events published by the various services and invokes the OrderHistoryDAO
  • OrderHistoryQuery API module — 实现前面描述的 REST 端点
  • OrderHistoryQuery API module—Implements the REST endpoints described earlier
  • OrderHistoryDataAccess — 包含 OrderHistoryDAO，它定义了更新和查询 ftgo-order-history DynamoDB 表的方法，以及其帮助程序类
  • OrderHistoryDataAccess—Contains the OrderHistoryDAO, which defines the methods that update and query the ftgo-order-history DynamoDB table and its helper classes
  • ftgo-order-history DynamoDB 表 — 存储订单的表
  • ftgo-order-history DynamoDB table—The table that stores the orders
图 7.12. OrderHistoryService 的设计。OrderHistoryEventHandlers 更新数据库以响应事件。OrderHistoryQuery 模块通过查询数据库来实现查询操作。这两个模块都使用 OrderHistoryDataAccess 模块来访问数据库。

Figure 7.12. The design of OrderHistoryService. OrderHistoryEventHandlers updates the database in response to events. The OrderHistoryQuery module implements the query operations by querying the database. Both modules use the OrderHistoryDataAccess module to access the database.

让我们更详细地了解一下事件处理程序、DAO 和 DynamoDB 表的设计。

Let’s look at the design of the event handlers, the DAO, and the DynamoDB table in more detail.

7.4.1. OrderHistoryEventHandlers 模块

7.4.1. The OrderHistoryEventHandlers module

此模块由使用事件并更新 DynamoDB 表的事件处理程序组成。如下面的清单所示，事件处理程序是简单的方法。每个方法都是一行代码，它使用从事件派生的参数调用 OrderHistoryDao 方法。

This module consists of the event handlers that consume events and update the DynamoDB table. As the following listing shows, the event handlers are simple methods. Each method is a one-liner that invokes an OrderHistoryDao method with arguments that are derived from the event.

清单 7.1. 调用 OrderHistoryDao

Listing 7.1. Invoking the OrderHistoryDao

public class OrderHistoryEventHandlers {

  private OrderHistoryDao orderHistoryDao;

  public OrderHistoryEventHandlers(OrderHistoryDao orderHistoryDao) {
    this.orderHistoryDao = orderHistoryDao;
  }

  public void handleOrderCreated(DomainEventEnvelope<OrderCreated> dee) {
    orderHistoryDao.addOrder(makeOrder(dee.getAggregateId(), dee.getEvent()),
                              makeSourceEvent(dee));
  }

  private Order makeOrder(String orderId, OrderCreatedEvent event) {
    ...
  }

  public void handleDeliveryPickedUp(DomainEventEnvelope<DeliveryPickedUp>
                                             dee) {
   orderHistoryDao.notePickedUp(dee.getEvent().getOrderId(),
           makeSourceEvent(dee));
  }

  ...

每个事件处理程序都有一个类型为 DomainEventEnvelope 的参数，其中包含事件和一些描述事件的元数据。例如，handleOrderCreated() 方法被调用来处理 OrderCreated 事件。它调用 orderHistoryDao.addOrder() 在数据库中创建一个 Order。同样，handleDeliveryPickedUp() 方法被调用来处理 DeliveryPickedUp 事件。它调用 orderHistoryDao.notePickedUp() 来更新数据库中 Order 的状态。

Each event handler has a single parameter of type DomainEventEnvelope, which contains the event and some metadata describing the event. For example, the handleOrderCreated() method is invoked to handle an OrderCreated event. It calls orderHistoryDao.addOrder() to create an Order in the database. Similarly, the handleDeliveryPickedUp() method is invoked to handle a DeliveryPickedUp event. It calls orderHistoryDao.notePickedUp() to update the status of the Order in the database.

这两种方法都调用帮助程序方法 makeSourceEvent()，该方法构造一个 SourceEvent，其中包含发出事件的聚合的类型和 ID 以及事件 ID。在下一节中，您将看到 OrderHistoryDao 使用 SourceEvent 来确保更新操作是幂等的。

Both methods call the helper method makeSourceEvent(), which constructs a SourceEvent containing the type and ID of the aggregate that emitted the event and the event ID. In the next section you’ll see that OrderHistoryDao uses SourceEvent to ensure that update operations are idempotent.

现在，让我们看看 DynamoDB 表的设计，然后再检查 OrderHistoryDao。

Let’s now look at the design of the DynamoDB table and after that examine OrderHistoryDao.

7.4.2. 使用 DynamoDB 进行数据建模和查询设计

7.4.2. Data modeling and query design with DynamoDB

与许多 NoSQL 数据库一样，DynamoDB 的数据访问操作远不如 RDBMS 提供的那样强大。因此，您必须仔细设计数据的存储方式。特别是，查询通常决定了架构的设计。我们需要解决几个设计问题：

Like many NoSQL databases, DynamoDB has data access operations that are much less powerful than those that are provided by an RDBMS. Consequently, you must carefully design how the data is stored. In particular, the queries often dictate the design of the schema. We need to address several design issues:

  • 设计 ftgo-order-history 表
  • Designing the ftgo-order-history table
  • 为 findOrderHistory 查询定义索引
  • Defining an index for the findOrderHistory query
  • 实现 findOrderHistory 查询
  • Implementing the findOrderHistory query
  • 对查询结果进行分页
  • Paginating the query results
  • 更新订单
  • Updating orders
  • 检测重复事件
  • Detecting duplicate events

我们将依次讨论每个问题。

We’ll look at each one in turn.

设计 ftgo-order-history 表

Designing the ftgo-order-history table

DynamoDB 存储模型由表（包含项目）和索引（提供访问表项目的替代方法，稍后讨论）组成。项目是命名属性的集合。属性值可以是标量值（如字符串）、多值的字符串集合或命名属性的集合。虽然项目相当于 RDBMS 中的一行，但它更加灵活，并且可以存储整个聚合。

The DynamoDB storage model consists of tables, which contain items, and indexes, which provide alternative ways to access a table’s items (discussed shortly). An item is a collection of named attributes. An attribute value is either a scalar value such as a string, a multivalued collection of strings, or a collection of named attributes. Although an item is the equivalent to a row in an RDBMS, it’s a lot more flexible and can store an entire aggregate.

这种灵活性使 OrderHistoryDataAccess 模块能够将每个 Order 作为单个项目存储在名为 ftgo-order-history 的 DynamoDB 表中。Order 类的每个字段都映射到一个项目属性，如图 7.13 所示。简单字段（如 orderCreationTime 和 status）映射到单值项目属性。lineItems 字段映射到一个属性，该属性是一个地图列表，每个订单行项目一个地图。它可以被视为一个 JSON 对象数组。

This flexibility enables the OrderHistoryDataAccess module to store each Order as a single item in a DynamoDB table called ftgo-order-history. Each field of the Order class is mapped to an item attribute, as shown in figure 7.13. Simple fields such as orderCreationTime and status are mapped to single-value item attributes. The lineItems field is mapped to an attribute that is a list of maps, one map per order line item. It can be considered to be a JSON array of objects.

图 7.13. OrderHistory DynamoDB 表的初步结构

Figure 7.13. The preliminary structure of the OrderHistory DynamoDB table

表定义的一个重要部分是其主键。DynamoDB 应用程序按主键插入、更新和检索表的项目。将主键设为 orderId 似乎是合理的。这使 Order History Service 能够按 orderId 插入、更新和检索订单。但在最终确定此决定之前，让我们首先探讨表的主键如何影响它支持的数据访问操作的类型。

An important part of the definition of a table is its primary key. A DynamoDB application inserts, updates, and retrieves a table’s items by primary key. It would seem to make sense for the primary key to be orderId. This enables Order History Service to insert, update, and retrieve an order by orderId. But before finalizing this decision, let’s first explore how a table’s primary key impacts the kinds of data access operations it supports.

定义 findOrderHistory 查询的索引

Defining an index for the findOrderHistory query

此表定义支持基于主键的 Orders 读取和写入。但它不支持像 findOrderHistory() 这样返回按年龄递增排序的多个匹配订单的查询。这是因为，正如您将在本节后面看到的那样，此查询使用 DynamoDB 的 query() 操作，该操作要求表具有由两个标量属性组成的复合主键。第一个属性是分区键。之所以称为分区键，是因为 DynamoDB 的 Z 轴扩展（如第 1 章所述）使用它来选择项目的存储分区。第二个属性是排序键。query() 操作返回具有指定分区键、排序键在指定范围内且与可选筛选表达式匹配的那些项目。它按排序键指定的顺序返回项目。

This table definition supports primary key-based reads and writes of Orders. But it doesn’t support a query such as findOrderHistory() that returns multiple matching orders sorted by increasing age. That’s because, as you will see later in this section, this query uses the DynamoDB query() operation, which requires a table to have a composite primary key consisting of two scalar attributes. The first attribute is a partition key. The partition key is so called because DynamoDB’s Z-axis scaling (described in chapter 1) uses it to select an item’s storage partition. The second attribute is the sort key. A query() operation returns those items that have the specified partition key, have a sort key in the specified range, and match the optional filter expression. It returns items in the order specified by the sort key.

findOrderHistory() 查询操作返回按年龄递增排序的消费者订单。因此，它需要一个以 consumerId 作为分区键、orderCreationDate 作为排序键的主键。但 (consumerId, orderCreationDate) 作为 ftgo-order-history 表的主键没有意义，因为它不是唯一的。

The findOrderHistory() query operation returns a consumer’s orders sorted by increasing age. It therefore requires a primary key that has the consumerId as the partition key and the orderCreationDate as the sort key. But it doesn’t make sense for (consumerId, orderCreationDate) to be the primary key of the ftgo-order-history table, because it’s not unique.

解决方案是让 findOrderHistory() 查询 DynamoDB 所称的 ftgo-order-history 表上的二级索引。此索引以 (consumerId, orderCreationDate) 作为其非唯一键。与 RDBMS 索引一样，DynamoDB 索引会在其表更新时自动更新。但与典型的 RDBMS 索引不同，DynamoDB 索引可以具有非键属性。非键属性可以提高性能，因为它们由查询返回，因此应用程序不必从表中获取它们。此外，您很快就会看到，它们还可用于筛选。图 7.14 显示了表和此索引的结构。

The solution is for findOrderHistory() to query what DynamoDB calls a secondary index on the ftgo-order-history table. This index has (consumerId, orderCreationDate) as its non-unique key. Like an RDBMS index, a DynamoDB index is automatically updated whenever its table is updated. But unlike a typical RDBMS index, a DynamoDB index can have non-key attributes. Non-key attributes improve performance because they’re returned by the query, so the application doesn’t have to fetch them from the table. Also, as you’ll soon see, they can be used for filtering. Figure 7.14 shows the structure of the table and this index.

图 7.14. OrderHistory 表及其索引的设计

Figure 7.14. The design of the OrderHistory table and its index

该索引是 ftgo-order-history 表定义的一部分，名为 ftgo-order-history-by-consumer-id-and-creation-time。索引的属性包括主键属性 consumerId 和 orderCreationTime，以及非键属性，包括 orderId 和 status。

The index is part of the definition of the ftgo-order-history table and is called ftgo-order-history-by-consumer-id-and-creation-time. The index’s attributes include the primary key attributes, consumerId and orderCreationTime, and non-key attributes, including orderId and status.

ftgo-order-history-by-consumer-id-and-creation-time 索引使 OrderHistoryDaoDynamoDb 能够高效地检索按年龄递增排序的消费者订单。

The ftgo-order-history-by-consumer-id-and-creation-time index enables the OrderHistoryDaoDynamoDb to efficiently retrieve a consumer’s orders sorted by increasing age.

现在,让我们看看如何仅检索与筛选条件匹配的订单。

Let’s now look at how to retrieve only those orders that match the filter criteria.

实现 findOrderHistory 查询

Implementing the findOrderHistory query

findOrderHistory() 查询操作具有一个指定搜索条件的 filter 参数。其中一个筛选条件是要返回的订单的最长期限。这很容易实现，因为 DynamoDB Query 操作的键条件表达式支持对排序键进行范围限制。其他筛选条件对应于非键属性，可以使用筛选表达式（一个布尔表达式）来实现。DynamoDB Query 操作仅返回满足筛选表达式的那些项目。例如，要查找 CANCELLED 状态的 Orders，OrderHistoryDaoDynamoDb 使用筛选表达式 orderStatus = :orderStatus，其中 :orderStatus 是占位符参数。

The findOrderHistory() query operation has a filter parameter that specifies the search criteria. One filter criterion is the maximum age of the orders to return. This is easy to implement because the DynamoDB Query operation’s key condition expression supports a range restriction on the sort key. The other filter criteria correspond to non-key attributes and can be implemented using a filter expression, which is a Boolean expression. A DynamoDB Query operation returns only those items that satisfy the filter expression. For example, to find Orders that are CANCELLED, the OrderHistoryDaoDynamoDb uses a query expression orderStatus = :orderStatus, where :orderStatus is a placeholder parameter.

关键字筛选条件的实现更具挑战性。它选择餐厅名称或菜单项与指定关键字之一匹配的订单。OrderHistoryDaoDynamoDb 通过对餐厅名称和菜单项进行分词，并将关键字集存储在名为 keywords 的集合值属性中来启用关键字搜索。它使用包含 contains() 函数的筛选表达式（例如 contains(keywords, :keyword1) OR contains(keywords, :keyword2)，其中 :keyword1 和 :keyword2 是指定关键字的占位符）来查找与关键字匹配的订单。

The keyword filter criteria is more challenging to implement. It selects orders whose restaurant name or menu items match one of the specified keywords. The OrderHistoryDaoDynamoDb enables the keyword search by tokenizing the restaurant name and menu items and storing the set of keywords in a set-valued attribute called keywords. It finds the orders that match the keywords by using a filter expression that uses the contains() function, for example contains(keywords, :keyword1) OR contains(keywords, :keyword2), where :keyword1 and :keyword2 are placeholders for the specified keywords.
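
A sketch of the two pieces of this scheme in plain Java: deriving the keywords attribute by tokenizing text, and building the OR-ed contains() filter expression. The placeholder-naming convention (:keyword1, :keyword2, …) is an assumption, not the book’s exact code:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch: keyword search support for the view. tokenize() computes the
// set-valued keywords attribute; buildFilterExpression() produces a
// DynamoDB-style filter expression string.
class KeywordFilterSketch {

    // Split text (restaurant name, menu items) into lowercase keywords.
    static Set<String> tokenize(String text) {
        Set<String> keywords = new LinkedHashSet<>();
        for (String token : text.toLowerCase().split("\\s+")) {
            if (!token.isEmpty()) {
                keywords.add(token);
            }
        }
        return keywords;
    }

    // Build "contains(keywords, :keyword1) OR contains(keywords, :keyword2) ..."
    static String buildFilterExpression(List<String> keywords) {
        List<String> clauses = new ArrayList<>();
        for (int i = 0; i < keywords.size(); i++) {
            clauses.add("contains(keywords, :keyword" + (i + 1) + ")");
        }
        return String.join(" OR ", clauses);
    }
}
```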

对查询结果进行分页

Paginating the query results

一些消费者会有大量的订单。因此，findOrderHistory() 查询操作使用分页是有意义的。DynamoDB Query 操作有一个 pageSize 参数，该参数指定要返回的最大项目数。如果还有更多项目，则查询结果会有一个非空的 LastEvaluatedKey 属性。DAO 可以通过将 exclusiveStartKey 参数设置为 LastEvaluatedKey 来再次调用查询，从而检索下一页项目。

Some consumers will have a large number of orders. It makes sense, therefore, for the findOrderHistory() query operation to use pagination. The DynamoDB Query operation has a pageSize parameter, which specifies the maximum number of items to return. If there are more items, the result of the query has a non-null LastEvaluatedKey attribute. A DAO can retrieve the next page of items by invoking the query with the exclusiveStartKey parameter set to LastEvaluatedKey.

As you can see, DynamoDB doesn’t support position-based pagination. Consequently, Order History Service returns an opaque pagination token to its client. The client uses this pagination token to request the next page of results.
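One way to make the token opaque is to base64-encode a serialized form of LastEvaluatedKey, which the client echoes back unchanged on the next request. This is a sketch under assumptions: a real implementation would serialize the full key map as JSON, and here a plain string stands in for it.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical sketch of an opaque pagination token.
class PaginationToken {

    // Encode the serialized LastEvaluatedKey so clients can't depend on its shape.
    static String encode(String lastEvaluatedKey) {
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(lastEvaluatedKey.getBytes(StandardCharsets.UTF_8));
    }

    // Recover the key from the token the client sends back.
    static String decode(String token) {
        return new String(Base64.getUrlDecoder().decode(token), StandardCharsets.UTF_8);
    }
}
```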

Now that I’ve described how to query DynamoDB for orders, let’s look at how to insert and update them.

Updating orders

DynamoDB supports two operations for adding and updating items: PutItem() and UpdateItem(). The PutItem() operation creates or replaces an entire item by its primary key. In theory, OrderHistoryDaoDynamoDb could use this operation to insert and update orders. One challenge, however, with using PutItem() is ensuring that simultaneous updates to the same item are handled correctly.

Consider, for example, the scenario where two event handlers simultaneously attempt to update the same item. Each event handler calls OrderHistoryDaoDynamoDb to load the item from DynamoDB, change it in memory, and update it in DynamoDB using PutItem(). One event handler could potentially overwrite the change made by the other event handler. OrderHistoryDaoDynamoDb can prevent lost updates by using DynamoDB’s optimistic locking mechanism. But an even simpler and more efficient approach is to use the UpdateItem() operation.

The UpdateItem() operation updates individual attributes of the item, creating the item if necessary. Since different event handlers update different attributes of the Order item, using UpdateItem makes sense. This operation is also more efficient because there’s no need to first retrieve the order from the table.

One challenge with updating the database in response to events is, as mentioned earlier, detecting and discarding duplicate events. Let’s look at how to do that when using DynamoDB.

Detecting duplicate events

All of Order History Service’s event handlers are idempotent. Each one sets one or more attributes of the Order item. Order History Service could, therefore, simply ignore the issue of duplicate events. The downside of ignoring the issue, though, is that the Order item will sometimes be temporarily out-of-date. That’s because an event handler that receives a duplicate event will set an Order item’s attributes to previous values. The Order item won’t have the correct values until later events are redelivered.

As described earlier, one way to prevent data from becoming out-of-date is to detect and discard duplicate events. OrderHistoryDaoDynamoDb can detect duplicate events by recording in each item the events that have caused it to be updated. It can then use the UpdateItem() operation’s conditional update mechanism to only update an item if an event isn’t a duplicate.

A conditional update is only performed if a condition expression is true. A condition expression tests whether an attribute exists or has a particular value. The OrderHistoryDaoDynamoDb DAO can track events received from each aggregate instance using an attribute called «aggregateType»«aggregateId» whose value is the highest received event ID. An event is a duplicate if the attribute exists and its value is less than or equal to the event ID. The OrderHistoryDaoDynamoDb DAO uses this condition expression:

attribute_not_exists(«aggregateType»«aggregateId»)
     OR «aggregateType»«aggregateId» < :eventId

The condition expression only allows the update if the attribute doesn’t exist or the eventId is greater than the last processed event ID.

For example, suppose an event handler receives a DeliveryPickup event whose ID is 123323-343434 from a Delivery aggregate whose ID is 3949384394-039434903. The name of the tracking attribute is Delivery3949384394-039434903. The event handler should consider the event to be a duplicate if the value of this attribute is greater than or equal to 123323-343434. The UpdateItem() operation invoked by the event handler updates the Order item using this condition expression:

attribute_not_exists(Delivery3949384394-039434903)
     OR Delivery3949384394-039434903 < :eventId
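The Delivery example above suggests how the condition expression could be assembled from the aggregate type and ID. This is a hypothetical sketch, following the «aggregateType»«aggregateId» naming convention described earlier, not code from the book's example application:

```java
// Hypothetical sketch of building the duplicate-detection condition expression.
class DuplicateDetection {

    static String conditionExpression(String aggregateType, String aggregateId) {
        String trackingAttribute = aggregateType + aggregateId;
        // Allow the update only if no event from this aggregate has been
        // recorded yet, or the recorded event ID is older than :eventId.
        return "attribute_not_exists(" + trackingAttribute + ") OR "
                + trackingAttribute + " < :eventId";
    }
}
```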

Now that I’ve described the DynamoDB data model and query design, let’s take a look at OrderHistoryDaoDynamoDb, which defines the methods that update and query the ftgo-order-history table.

7.4.3. The OrderHistoryDaoDynamoDb class

The OrderHistoryDaoDynamoDb class implements methods that read and write items in the ftgo-order-history table. Its update methods are invoked by OrderHistoryEventHandlers, and its query methods are invoked by OrderHistoryQuery API. Let’s take a look at some example methods, starting with the addOrder() method.

The addOrder() method

The addOrder() method, which is shown in listing 7.2, adds an order to the ftgo-order-history table. It has two parameters: order and sourceEvent. The order parameter is the Order to add, which is obtained from the OrderCreated event. The sourceEvent parameter contains the eventId and the type and ID of the aggregate that emitted the event. It’s used to implement the conditional update.

Listing 7.2. The addOrder() method adds or updates an Order
public class OrderHistoryDaoDynamoDb ...

@Override
public boolean addOrder(Order order, Optional<SourceEvent> eventSource) {
 UpdateItemSpec spec = new UpdateItemSpec()
         .withPrimaryKey("orderId", order.getOrderId())                      1
         .withUpdateExpression("SET orderStatus = :orderStatus, " +          2
                  "creationDate = :cd, consumerId = :consumerId, lineItems =" +
                 " :lineItems, keywords = :keywords, restaurantName = " +
                 ":restaurantName")
         .withValueMap(new Maps()                                            3
                  .add(":orderStatus", order.getStatus().toString())
                 .add(":cd", order.getCreationDate().getMillis())
                 .add(":consumerId", order.getConsumerId())
                 .add(":lineItems", mapLineItems(order.getLineItems()))
                 .add(":keywords", mapKeywords(order))
                 .add(":restaurantName", order.getRestaurantName())
                 .map())
         .withReturnValues(ReturnValue.NONE);
 return idempotentUpdate(spec, eventSource);
}

  • 1 The primary key of the Order item to update
  • 2 The update expression that updates the attributes
  • 3 The values of the placeholders in the update expression

The addOrder() method creates an UpdateItemSpec, which is part of the AWS SDK and describes the update operation. After creating the UpdateItemSpec, it calls idempotentUpdate(), a helper method that performs the update after adding a condition expression that guards against duplicate updates.

The notePickedUp() method

The notePickedUp() method, shown in listing 7.3, is called by the event handler for the DeliveryPickedUp event. It changes the deliveryStatus of the Order item to PICKED_UP.

Listing 7.3. The notePickedUp() method changes the order status to PICKED_UP
public class OrderHistoryDaoDynamoDb ...

@Override
public void notePickedUp(String orderId, Optional<SourceEvent> eventSource) {
 UpdateItemSpec spec = new UpdateItemSpec()
         .withPrimaryKey("orderId", orderId)
         .withUpdateExpression("SET #deliveryStatus = :deliveryStatus")
         .withNameMap(Collections.singletonMap("#deliveryStatus",
                 DELIVERY_STATUS_FIELD))
         .withValueMap(Collections.singletonMap(":deliveryStatus",
                 DeliveryStatus.PICKED_UP.toString()))
         .withReturnValues(ReturnValue.NONE);
 idempotentUpdate(spec, eventSource);
}

This method is similar to addOrder(). It creates an UpdateItemSpec and invokes idempotentUpdate(). Let’s look at the idempotentUpdate() method.

The idempotentUpdate() method

The following listing shows the idempotentUpdate() method, which updates the item after possibly adding a condition expression to the UpdateItemSpec that guards against duplicate updates.

Listing 7.4. The idempotentUpdate() method ignores duplicate events
public class OrderHistoryDaoDynamoDb ...

private boolean idempotentUpdate(UpdateItemSpec spec, Optional<SourceEvent>
        eventSource) {
 try {
  table.updateItem(eventSource.map(es -> es.addDuplicateDetection(spec))
          .orElse(spec));
  return true;
 } catch (ConditionalCheckFailedException e) {
  // Do nothing
  return false;
 }
}

If the sourceEvent is supplied, idempotentUpdate() invokes SourceEvent.addDuplicateDetection() to add to UpdateItemSpec the condition expression that was described earlier. The idempotentUpdate() method catches and ignores the ConditionalCheckFailedException, which is thrown by updateItem() if the event was a duplicate.

Now that we’ve seen the code that updates the table, let’s look at the query method.

The findOrderHistory() method

The findOrderHistory() method, shown in listing 7.5, retrieves the consumer’s orders by querying the ftgo-order-history table using the ftgo-order-history-by-consumer-id-and-creation-time secondary index. It has two parameters: consumerId specifies the consumer, and filter specifies the search criteria. This method creates a QuerySpec (which, like UpdateItemSpec, is part of the AWS SDK) from its parameters, queries the index, and transforms the returned items into an OrderHistory object.

Listing 7.5. The findOrderHistory() method retrieves the consumer’s matching orders
public class OrderHistoryDaoDynamoDb ...

@Override
public OrderHistory findOrderHistory(String consumerId, OrderHistoryFilter
        filter) {

 QuerySpec spec = new QuerySpec()
         .withScanIndexForward(false)                                    1
          .withHashKey("consumerId", consumerId)
         .withRangeKeyCondition(new RangeKeyCondition("creationDate")    2
                                  .gt(filter.getSince().getMillis()));

 filter.getStartKeyToken().ifPresent(token ->
       spec.withExclusiveStartKey(toStartingPrimaryKey(token)));

 Map<String, Object> valuesMap = new HashMap<>();

 String filterExpression = Expressions.and(                              3
          keywordFilterExpression(valuesMap, filter.getKeywords()),
         statusFilterExpression(valuesMap, filter.getStatus()));

 if (!valuesMap.isEmpty())
  spec.withValueMap(valuesMap);

 if (StringUtils.isNotBlank(filterExpression)) {
  spec.withFilterExpression(filterExpression);
 }

 filter.getPageSize().ifPresent(spec::withMaxResultSize);                4

 ItemCollection<QueryOutcome> result = index.query(spec);

 return new OrderHistory(
         StreamSupport.stream(result.spliterator(), false)
            .map(this::toOrder)                                          5
             .collect(toList()),
         Optional.ofNullable(result
               .getLastLowLevelResult()
               .getQueryResult().getLastEvaluatedKey())
            .map(this::toStartKeyToken));
}

  • 1 Specifies that query must return the orders in order of increasing age
  • 2 The maximum age of the orders to return
  • 3 Construct a filter expression and placeholder value map from the OrderHistoryFilter.
  • 4 Limit the number of results if the caller has specified a page size.
  • 5 Create an Order from an item returned by the query.

After building a QuerySpec, this method then executes a query and builds an OrderHistory, which contains the list of Orders, from the returned items.
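The Expressions.and() helper used in listing 7.5 isn't shown in the text. It presumably joins the non-blank sub-expressions, which is why the listing only sets the filter expression when the result is non-blank. A minimal sketch, under that assumption, might look like this:

```java
// Minimal sketch of an Expressions.and()-style helper, assuming it joins
// non-blank sub-expressions with AND. The real helper may differ.
class Expressions {

    static String and(String... expressions) {
        StringBuilder result = new StringBuilder();
        for (String expression : expressions) {
            if (expression != null && !expression.trim().isEmpty()) {
                if (result.length() > 0) {
                    result.append(" AND ");
                }
                result.append(expression);
            }
        }
        return result.toString();
    }
}
```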

The findOrderHistory() method implements pagination by serializing the value returned by getLastEvaluatedKey() into a JSON token. If a client specifies a start token in OrderHistoryFilter, then findOrderHistory() deserializes it and invokes withExclusiveStartKey() to set the start key.

As you can see, you must address numerous issues when implementing a CQRS view, including picking a database, designing the data model that efficiently implements updates and queries, handling concurrent updates, and dealing with duplicate events. The only complex part of the code is the DAO, because it must properly handle concurrency and ensure that updates are idempotent.

Summary

  • Implementing queries that retrieve data from multiple services is challenging because each service’s data is private.
  • There are two ways to implement these kinds of query: the API composition pattern and the Command query responsibility segregation (CQRS) pattern.
  • The API composition pattern, which gathers data from multiple services, is the simplest way to implement queries and should be used whenever possible.
  • A limitation of the API composition pattern is that some complex queries require inefficient in-memory joins of large datasets.
  • The CQRS pattern, which implements queries using view databases, is more powerful but more complex to implement.
  • A CQRS view module must handle concurrent updates as well as detect and discard duplicate events.
  • CQRS improves separation of concerns by enabling a service to implement a query that returns data owned by a different service.
  • Clients must handle the eventual consistency of CQRS views.

Chapter 8. External API patterns

This chapter covers

  • The challenge of designing APIs that support a diverse set of clients
  • Applying API gateway and Backends for frontends patterns
  • Designing and implementing an API gateway
  • Using reactive programming to simplify API composition
  • Implementing an API gateway using GraphQL

The FTGO application, like many other applications, has a REST API. Its clients include the FTGO mobile applications, JavaScript running in the browser, and applications developed by partners. In the monolithic architecture, the API that’s exposed to clients is the monolith’s API. But once the FTGO team starts deploying microservices, there’s no longer one API, because each service has its own API. Mary and her team must decide what kind of API the FTGO application should now expose to its clients. For example, should clients be aware of the existence of services and make requests to them directly?

The task of designing an application’s external API is made even more challenging by the diversity of its clients. Different clients typically require different data. A desktop browser-based UI usually displays far more information than a mobile application. Also, different clients access the services over different kinds of networks. The clients within the firewall use a high-performance LAN, and the clients outside of the firewall use the internet or mobile network, which will have lower performance. Consequently, as you’ll learn, it often doesn’t make sense to have a single, one-size-fits-all API.

This chapter begins by describing various external API design issues. I then describe the external API patterns. I cover the API gateway pattern and then the Backends for frontends pattern. After that, I discuss how to design and implement an API gateway. I review the various options that are available, which include off-the-shelf API gateway products and frameworks for developing your own. I describe the design and implementation of an API gateway that’s built using the Spring Cloud Gateway framework. I also describe how to build an API gateway using GraphQL, a framework that provides a graph-based query language.

8.1. External API design issues

In order to explore the various API-related issues, let’s consider the FTGO application. As figure 8.1 shows, this application’s services are consumed by a variety of clients. Four kinds of clients consume the services’ APIs:

  • Web applications, such as Consumer web application, which implements the browser-based UI for consumers, Restaurant web application, which implements the browser-based UI for restaurants, and Admin web application, which implements the internal administrator UI
  • JavaScript applications running in the browser
  • Mobile applications, one for consumers and the other for couriers
  • Applications written by third-party developers
Figure 8.1. The FTGO application’s services and their clients. There are several different kinds of clients. Some are inside the firewall, and others are outside. Clients outside the firewall access the services over the lower-performance internet/mobile network. Clients inside the firewall use the higher-performance LAN.

The web applications run inside the firewall, so they access the services over a high-bandwidth, low-latency LAN. The other clients run outside the firewall, so they access the services over the lower-bandwidth, higher-latency internet or mobile network.

One approach to API design is for clients to invoke the services directly. On the surface, this sounds quite straightforward—after all, that’s how clients invoke the API of a monolithic application. But this approach is rarely used in a microservice architecture because of the following drawbacks:

  • The fine-grained service APIs require clients to make multiple requests to retrieve the data they need, which is inefficient and can result in a poor user experience.
  • The lack of encapsulation caused by clients knowing about each service and its API makes it difficult to change the architecture and the APIs.
  • Services might use IPC mechanisms that aren’t convenient or practical for clients to use, especially those clients outside the firewall.

To learn more about these drawbacks, let’s take a look at how the FTGO mobile application for consumers retrieves data from the services.

8.1.1. API design issues for the FTGO mobile client

Consumers use the FTGO mobile client to place and manage their orders. Imagine you’re developing the mobile client’s View Order view, which displays an order. As described in chapter 7, the information displayed by this view includes basic order information, including its status, payment status, status of the order from the restaurant’s perspective, and delivery status, including its location and estimated delivery time if in transit.

The monolithic version of the FTGO application has an API endpoint that returns the order details. The mobile client retrieves the information it needs by making a single request. In contrast, in the microservices version of the FTGO application, the order details are, as described previously, scattered across several services, including the following:

  • Order Service: Basic order information, including the details and status
  • Kitchen Service: The status of the order from the restaurant’s perspective and the estimated time it will be ready for pickup
  • Delivery Service: The order’s delivery status, its estimated delivery time, and its current location
  • Accounting Service: The order’s payment status

If the mobile client invokes the services directly, then it must, as figure 8.2 shows, make multiple calls to retrieve this data.

Figure 8.2. A client can retrieve the order details from the monolithic FTGO application with a single request. But the client must make multiple requests to retrieve the same information in a microservice architecture.

In this design, the mobile application is playing the role of API composer. It invokes multiple services and combines the results. Although this approach seems reasonable, it has several serious problems.

Poor user experience due to the client making multiple requests

The first problem is that the mobile application must sometimes make multiple requests to retrieve the data it wants to display to the user. The chatty interaction between the application and the services can make the application seem unresponsive, especially when it uses the internet or a mobile network. The internet has much lower bandwidth and higher latency than a LAN, and mobile networks are even worse. The latency of a mobile network (and internet) is typically 100x greater than a LAN.

The higher latency might not be a problem when retrieving the order details, because the mobile application minimizes the delay by executing the requests concurrently. The overall response time is no greater than that of a single request. But in other scenarios, a client may need to execute requests sequentially, which will result in a poor user experience.
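To illustrate why concurrent execution keeps the overall response time close to that of a single request, here is a sketch of client-side API composition using CompletableFuture. The fetch methods are hypothetical stand-ins for calls to Order Service and Delivery Service, not code from the FTGO application:

```java
import java.util.concurrent.CompletableFuture;

// Sketch of client-side API composition: both requests are started before
// either result is awaited, so the total latency is roughly that of the
// slower call rather than the sum of both.
class ViewOrderComposer {

    static CompletableFuture<String> fetchOrder(String orderId) {
        return CompletableFuture.supplyAsync(() -> "orderStatus=APPROVED");
    }

    static CompletableFuture<String> fetchDelivery(String orderId) {
        return CompletableFuture.supplyAsync(() -> "deliveryStatus=PICKED_UP");
    }

    static String viewOrder(String orderId) {
        CompletableFuture<String> order = fetchOrder(orderId);       // request 1 starts
        CompletableFuture<String> delivery = fetchDelivery(orderId); // request 2 starts
        return order.join() + ", " + delivery.join();                // await both
    }
}
```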

What’s more, poor user experience due to network latency is not the only issue with a chatty API. It requires the mobile developer to write potentially complex API composition code. This work is a distraction from their primary task of creating a great user experience. Also, because each network request consumes power, a chatty API drains the mobile device’s battery faster.

Lack of encapsulation requires frontend developers to change their code in lockstep with the backend

Another drawback of a mobile application directly accessing the services is the lack of encapsulation. As an application evolves, the developers of a service sometimes change an API in a way that breaks existing clients. They might even change how the system is decomposed into services. Developers may add new services and split or merge existing services. But if knowledge about the services is baked into a mobile application, it can be difficult to change the services’ APIs.

Unlike when updating a server-side application, it takes hours or perhaps even days to roll out a new version of a mobile application. Apple or Google must approve the upgrade and make it available for download. Users might not download the upgrade immediately—if ever. And you may not want to force reluctant users to upgrade. The strategy of exposing service APIs to mobile creates a significant obstacle to evolving those APIs.

Services might use client-unfriendly IPC mechanisms

Another challenge with a mobile application directly calling services is that some services could use protocols that aren’t easily consumed by a client. Client applications that run outside the firewall typically use protocols such as HTTP and WebSockets. But as described in chapter 3, service developers have many protocols to choose from—not just HTTP. Some of an application’s services might use gRPC, whereas others could use the AMQP messaging protocol. These kinds of protocols work well internally, but might not be easily consumed by a mobile client. Some aren’t even firewall friendly.

8.1.2. API design issues for other kinds of clients

I picked the mobile client because it’s a great way to demonstrate the drawbacks of clients accessing services directly. But the problems created by exposing services to clients aren’t specific to just mobile clients. Other kinds of clients, especially those outside the firewall, also encounter these problems. As described earlier, the FTGO application’s services are consumed by web applications, browser-based JavaScript applications, and third-party applications. Let’s take a look at the API design issues with these clients.

API design issues for web applications

Traditional server-side web applications, which handle HTTP requests from browsers and return HTML pages, run within the firewall and access the services over a LAN. Network bandwidth and latency aren’t obstacles to implementing API composition in a web application. Also, web applications can use non-web-friendly protocols to access the services. The teams that develop web applications are part of the same organization and often work in close collaboration with the teams writing the backend services, so a web application can easily be updated whenever the backend services are changed. Consequently, it’s feasible for a web application to access the backend services directly.

API design issues for browser-based JavaScript applications

Modern browser applications use some amount of JavaScript. Even if the HTML is primarily generated by a server-side web application, it’s common for JavaScript running in the browser to invoke services. For example, all of the FTGO application web applications—Consumer, Restaurant, and Admin—contain JavaScript that invokes the backend services. The Consumer web application, for instance, dynamically refreshes the Order Details page using JavaScript that invokes the service APIs.

On one hand, browser-based JavaScript applications are easy to update when service APIs change. On the other hand, JavaScript applications that access the services over the internet have the same problems with network latency as mobile applications. To make matters worse, browser-based UIs, especially those for the desktop, are usually more sophisticated and need to compose more services than mobile applications. It’s likely that the Consumer and Restaurant applications, which access services over the internet, won’t be able to compose service APIs efficiently.

Designing APIs for third-party applications

FTGO, like many other organizations, exposes an API to third-party developers. The developers can use the FTGO API to write applications that place and manage orders. These third-party applications access the APIs over the internet, so API composition is likely to be inefficient. But the inefficiency of API composition is a relatively minor problem compared to the much larger challenge of designing an API that’s used by third-party applications. That’s because third-party developers need an API that’s stable.

Very few organizations can force third-party developers to upgrade to a new API. Organizations that have an unstable API risk losing developers to a competitor. Consequently, you must carefully manage the evolution of an API that’s used by third-party developers. You typically have to maintain older versions for a long time—possibly forever.

This requirement is a huge burden for an organization. It’s impractical to make the developers of the backend services responsible for maintaining long-term backward compatibility. Rather than expose services directly to third-party developers, organizations should have a separate public API that’s developed by a separate team. As you’ll learn later, the public API is implemented by an architectural component known as an API gateway. Let’s look at how an API gateway works.

8.2. The API gateway pattern

As you’ve just seen, there are numerous drawbacks with clients accessing services directly. It’s often not practical for a client to perform API composition over the internet. The lack of encapsulation makes it difficult for developers to change service decomposition and APIs. Services sometimes use communication protocols that aren’t suitable outside the firewall. Consequently, a much better approach is to use an API gateway.

Pattern: API gateway

Implement a service that’s the entry point into the microservices-based application from external API clients. See http://microservices.io/patterns/apigateway.html.

An API gateway is a service that’s the entry point into the application from the outside world. It’s responsible for request routing, API composition, and other functions, such as authentication. This section covers the API gateway pattern. I discuss its benefits and drawbacks and describe various design issues you must address when developing an API gateway.

8.2.1. Overview of the API gateway pattern

Section 8.1.1 described the drawbacks of clients, such as the FTGO mobile application, making multiple requests in order to display information to the user. A much better approach is for a client to make a single request to an API gateway, a service that serves as the single entry point for API requests into an application from outside the firewall. It’s similar to the Facade pattern from object-oriented design. Like a facade, an API gateway encapsulates the application’s internal architecture and provides an API to its clients. It may also have other responsibilities, such as authentication, monitoring, and rate limiting. Figure 8.3 shows the relationship between the clients, the API gateway, and the services.

Figure 8.3. The API gateway is the single entry point into the application for API calls from outside the firewall.

The API gateway is responsible for request routing, API composition, and protocol translation. All API requests from external clients first go to the API gateway, which routes some requests to the appropriate service. The API gateway handles other requests using the API composition pattern, invoking multiple services and aggregating the results. It may also translate between client-friendly protocols such as HTTP and WebSockets and the client-unfriendly protocols used by the services.

Request routing

One of the key functions of an API gateway is request routing. An API gateway implements some API operations by routing requests to the corresponding service. When it receives a request, the API gateway consults a routing map that specifies which service to route the request to. A routing map might, for example, map an HTTP method and path to the HTTP URL of a service. This function is identical to the reverse proxying features provided by web servers such as NGINX.
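
To make the idea concrete, here is a minimal sketch of such a routing map in plain Java. The routes and service URLs are hypothetical, and real gateways such as NGINX, Zuul, or Spring Cloud Gateway offer much richer matching:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

// A minimal routing map: (HTTP method, path prefix) -> backend service base URL.
// The gateway consults this declarative table for every incoming request.
public class RoutingMap {
    private final Map<String, String> routes = new LinkedHashMap<>();

    public RoutingMap addRoute(String method, String pathPrefix, String serviceUrl) {
        routes.put(method + " " + pathPrefix, serviceUrl);
        return this;
    }

    // Returns the base URL of the service that should handle the request, if any.
    public Optional<String> route(String method, String path) {
        return routes.entrySet().stream()
                .filter(e -> {
                    String[] parts = e.getKey().split(" ", 2);
                    return parts[0].equals(method) && path.startsWith(parts[1]);
                })
                .map(Map.Entry::getValue)
                .findFirst();
    }
}
```

A request such as GET /orders/123 is then forwarded to whichever service the table names; a request that matches no entry can be rejected by the gateway itself.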

API composition

An API gateway typically does more than simply reverse proxying. It might also implement some API operations using API composition. The FTGO API gateway, for example, implements the Get Order Details API operation using API composition. As figure 8.4 shows, the mobile application makes one request to the API gateway, which fetches the order details from multiple services.

Figure 8.4. An API gateway often performs API composition, which enables a client such as a mobile device to retrieve the data it needs with a single API request.

The FTGO API gateway provides a coarse-grained API that enables mobile clients to retrieve the data they need with a single request. For example, the mobile client makes a single getOrderDetails() request to the API gateway.

Protocol translation

An API gateway might also perform protocol translation. It might provide a RESTful API to external clients, even though the application services use a mixture of protocols internally, including REST and gRPC. When needed, the implementation of some API operations translates between the RESTful external API and the internal gRPC-based APIs.

The API gateway provides each client with a client-specific API

An API gateway could provide a single one-size-fits-all (OSFA) API. The problem with a single API is that different clients often have different requirements. For instance, a third-party application might require the Get Order Details API operation to return the complete Order details, whereas a mobile client only needs a subset of the data. One way to solve this problem is to give clients the option of specifying in a request which fields and related objects the server should return. This approach is adequate for a public API that must serve a broad range of third-party applications, but it often doesn’t give clients the control they need.
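
As a sketch of that option, assuming resources are represented as simple key-value maps, a handler could honor a hypothetical fields query parameter (e.g. GET /orders/123?fields=status,deliveryTime) like this:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of field selection for a one-size-fits-all API: the client names the
// attributes it wants, and the server projects the response down to them.
public class FieldSelector {
    public static Map<String, Object> project(Map<String, Object> resource, String fieldsParam) {
        if (fieldsParam == null || fieldsParam.isEmpty()) {
            return resource; // no selection requested: return the full representation
        }
        Set<String> wanted = Arrays.stream(fieldsParam.split(","))
                .map(String::trim)
                .collect(Collectors.toSet());
        return resource.entrySet().stream()
                .filter(e -> wanted.contains(e.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
                        (a, b) -> a, LinkedHashMap::new));
    }
}
```

A mobile client would pass fields=status,deliveryTime to get a small payload, while a third-party application would omit the parameter and receive the complete Order details.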

A better approach is for the API gateway to provide each client with its own API. For example, the FTGO API gateway can provide the FTGO mobile client with an API that’s specifically designed to meet its requirements. It may even have different APIs for the Android and iPhone mobile applications. The API gateway will also implement a public API for third-party developers to use. Later on, I’ll describe the Backends for frontends pattern that takes this concept of an API-per-client even further by defining a separate API gateway for each client.

Implementing edge functions

Although an API gateway’s primary responsibilities are API routing and composition, it may also implement what are known as edge functions. An edge function is, as the name suggests, a request-processing function implemented at the edge of an application. Examples of edge functions that an application might implement include the following:

  • Authentication: Verifying the identity of the client making the request.
  • Authorization: Verifying that the client is authorized to perform that particular operation.
  • Rate limiting: Limiting how many requests per second are allowed from a specific client and/or from all clients.
  • Caching: Caching responses to reduce the number of requests made to the services.
  • Metrics collection: Collecting metrics on API usage for billing analytics purposes.
  • Request logging: Logging requests.
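
To illustrate one of these edge functions, here is a minimal token-bucket rate limiter of the kind a gateway might keep per client. It is a sketch only; production gateways use hardened, often distributed, implementations:

```java
// Minimal token-bucket rate limiter: the bucket holds up to `capacity` tokens
// that refill at `refillPerSecond`. A request is allowed only if a token is
// available. Time is passed in explicitly to keep the sketch deterministic.
public class TokenBucket {
    private final long capacity;
    private final double refillPerSecond;
    private double tokens;
    private long lastRefillNanos;

    public TokenBucket(long capacity, double refillPerSecond, long nowNanos) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
        this.tokens = capacity;
        this.lastRefillNanos = nowNanos;
    }

    public synchronized boolean tryAcquire(long nowNanos) {
        double elapsedSeconds = (nowNanos - lastRefillNanos) / 1_000_000_000.0;
        tokens = Math.min(capacity, tokens + elapsedSeconds * refillPerSecond);
        lastRefillNanos = nowNanos;
        if (tokens >= 1.0) {
            tokens -= 1.0; // consume one token for this request
            return true;
        }
        return false; // over the limit: the gateway would return 429 Too Many Requests
    }
}
```

The gateway would keep one bucket per client identifier (API key, user ID) and reject requests when tryAcquire returns false.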

There are three different places in your application where you could implement these edge functions. First, you can implement them in the backend services. This might make sense for some functions, such as caching, metrics collection, and possibly authorization. But it’s generally more secure if the application authenticates requests on the edge before they reach the services.

The second option is to implement these edge functions in an edge service that’s upstream from the API gateway. The edge service is the first point of contact for an external client. It authenticates the request and performs other edge processing before passing it to the API gateway.

An important benefit of using a dedicated edge service is that it separates concerns. The API gateway focuses on API routing and composition. Another benefit is that it centralizes responsibility for critical edge functions such as authentication. That’s particularly valuable when an application has multiple API gateways that are possibly written using a variety of languages and frameworks. I’ll talk more about that later. The drawback of this approach is that it increases network latency because of the extra hop. It also adds to the complexity of the application.

As a result, it’s often convenient to use the third option and implement these edge functions, especially authorization, in the API gateway itself. There’s one less network hop, which improves latency. There are also fewer moving parts, which reduces complexity. Chapter 11 describes how the API gateway and the services collaborate to implement security.

API gateway architecture

An API gateway has a layered, modular architecture. Its architecture, shown in figure 8.5, consists of two layers: the API layer and a common layer. The API layer consists of one or more independent API modules. Each API module implements an API for a particular client. The common layer implements shared functionality, including edge functions such as authentication.

Figure 8.5. An API gateway has a layered, modular architecture. The API for each client is implemented by a separate module. The common layer implements functionality common to all APIs, such as authentication.

In this example, the API gateway has three API modules:

  • Mobile API: Implements the API for the FTGO mobile client
  • Browser API: Implements the API for the JavaScript application running in the browser
  • Public API: Implements the API for third-party developers

An API module implements each API operation in one of two ways. Some API operations map directly to a single service API operation. An API module implements these operations by routing requests to the corresponding service API operation. It might route requests using a generic routing module that reads a configuration file describing the routing rules.

An API module implements other, more complex API operations using API composition. The implementation of this API operation consists of custom code. Each API operation implementation handles requests by invoking multiple services and combining the results.

API gateway ownership model

An important question that you must answer is who is responsible for the development of the API gateway and its operation? There are a few different options. One is for a separate team to be responsible for the API gateway. The drawback to that is that it’s similar to SOA, where an Enterprise Service Bus (ESB) team was responsible for all ESB development. If a developer working on the mobile application needs access to a particular service, they must submit a request to the API gateway team and wait for them to expose the API. This kind of centralized bottleneck in the organization is very much counter to the philosophy of the microservice architecture, which promotes loosely coupled autonomous teams.

A better approach, which has been promoted by Netflix, is for the client teams—the mobile, web, and public API teams—to own the API module that exposes their API. An API gateway team is responsible for developing the Common module and for the operational aspects of the gateway. This ownership model, shown in figure 8.6, gives the teams control over their APIs.

Figure 8.6. The client teams own their API modules. When they change a client, they can change its API module without asking the API gateway team to make the change.

When a team needs to change their API, they check in the changes to the source repository for the API gateway. To work well, the API gateway’s deployment pipeline must be fully automated. Otherwise, the client teams will often be blocked waiting for the API gateway team to deploy the new version.

Using the Backends for frontends pattern

One concern with an API gateway is that responsibility for it is blurred. Multiple teams contribute to the same code base. An API gateway team is responsible for its operation. Though not as bad as a SOA ESB, this blurring of responsibilities is counter to the microservice architecture philosophy of “if you build it, you own it.”

The solution is to have an API gateway for each client, the so-called Backends for frontends (BFF) pattern, which was pioneered by Phil Calçado (http://philcalcado.com/) and his colleagues at SoundCloud. As figure 8.7 shows, each API module becomes its own standalone API gateway that’s developed and operated by a single client team.

Pattern: Backends for frontends

Implement a separate API gateway for each type of client. See http://microservices.io/patterns/apigateway.html.

Figure 8.7. The Backends for frontends pattern defines a separate API gateway for each client. Each client team owns their API gateway. An API gateway team owns the common layer.

The public API team owns and operates their API gateway, the mobile team owns and operates theirs, and so on. In theory, different API gateways could be developed using different technology stacks. But that risks duplicating code for common functionality, such as the code that implements edge functions. Ideally, all API gateways use the same technology stack. The common functionality is a shared library implemented by the API gateway team.

Besides clearly defining responsibilities, the BFF pattern has other benefits. The API modules are isolated from one another, which improves reliability. One misbehaving API can’t easily impact other APIs. It also improves observability, because different API modules are different processes. Another benefit of the BFF pattern is that each API is independently scalable. The BFF pattern also reduces startup time because each API gateway is a smaller, simpler application.

8.2.2. Benefits and drawbacks of an API gateway

As you might expect, the API gateway pattern has both benefits and drawbacks.

Benefits of an API gateway

A major benefit of using an API gateway is that it encapsulates the internal structure of the application. Rather than having to invoke specific services, clients talk to the gateway. The API gateway provides each client with a client-specific API, which reduces the number of round-trips between the client and application. It also simplifies the client code.

Drawbacks of an API gateway

The API gateway pattern also has some drawbacks. It is yet another highly available component that must be developed, deployed, and managed. There’s also a risk that the API gateway becomes a development bottleneck. Developers must update the API gateway in order to expose their services’ APIs. It’s important that the process for updating the API gateway be as lightweight as possible. Otherwise, developers will be forced to wait in line in order to update the gateway. Despite these drawbacks, though, for most real-world applications, it makes sense to use an API gateway. If necessary, you can use the Backends for frontends pattern to enable the teams to develop and deploy their APIs independently.

8.2.3. Netflix as an example of an API gateway

A great example of an API gateway is the Netflix API. The Netflix streaming service is available on hundreds of different kinds of devices including televisions, Blu-ray players, smartphones, and many more gadgets. Initially, Netflix attempted to have a one-size-fits-all style API for its streaming service (www.programmableweb.com/news/why-rest-keeps-me-night/2012/05/15). But the company soon discovered that didn’t work well because of the diverse range of devices and their different needs. Today, Netflix uses an API gateway that implements a separate API for each device. The client device team develops and owns the API implementation.

In the first version of the API gateway, each client team implemented their API using Groovy scripts that perform routing and API composition. Each script invoked one or more service APIs using Java client libraries provided by the service teams. On one hand, this works well, and client developers have written thousands of scripts. The Netflix API gateway handles billions of requests per day, and on average each API call fans out to six or seven backend services. On the other hand, Netflix has found this monolithic architecture to be somewhat cumbersome.

As a result, Netflix is now moving to an API gateway architecture similar to the Backends for frontends pattern. In this new architecture, client teams write API modules using NodeJS. Each API module runs in its own Docker container, but the scripts don’t invoke the services directly. Rather, they invoke a second “API gateway,” which exposes the service APIs using Netflix Falcor. Netflix Falcor is an API technology that does declarative, dynamic API composition and enables a client to invoke multiple services using a single request. This new architecture has a number of benefits. The API modules are isolated from one another, which improves reliability and observability, and the client API modules are independently scalable.

8.2.4. API gateway design issues

Now that we’ve looked at the API gateway pattern and its benefits and drawbacks, let’s examine various API gateway design issues. There are several issues to consider when designing an API gateway:

  • Performance and scalability
  • Writing maintainable code by using reactive programming abstractions
  • Handling partial failure
  • Being a good citizen in the application’s architecture

We’ll look at each one.

Performance and scalability

An API gateway is the application’s front door. All external requests must first pass through the gateway. Although most companies don’t operate at the scale of Netflix, which handles billions of requests per day, the performance and scalability of the API gateway are usually very important. A key design decision that affects performance and scalability is whether the API gateway should use synchronous or asynchronous I/O.

In the synchronous I/O model, each network connection is handled by a dedicated thread. This is a simple programming model and works reasonably well. For example, it’s the basis of the widely used Java EE servlet framework, although this framework provides the option of completing a request asynchronously. One limitation of synchronous I/O, however, is that operating system threads are heavyweight, so there is a limit on the number of threads, and hence concurrent connections, that an API gateway can have.
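
The thread-per-connection model described above looks roughly like the following schematic sketch (not production code; request parsing and error handling are omitted):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Synchronous I/O: the accept loop hands each connection to a dedicated thread.
// Every blocked read or write ties up an OS thread, which is what limits the
// number of concurrent connections this model can support.
public class ThreadPerConnectionServer {
    private final ServerSocket serverSocket;

    public ThreadPerConnectionServer() throws IOException {
        this.serverSocket = new ServerSocket(0); // bind to an ephemeral port
    }

    public int port() {
        return serverSocket.getLocalPort();
    }

    // Accept `maxConnections` connections, each handled by its own thread.
    public void serve(int maxConnections) throws IOException {
        for (int i = 0; i < maxConnections; i++) {
            Socket connection = serverSocket.accept();      // blocks
            new Thread(() -> handle(connection)).start();   // one thread per connection
        }
        serverSocket.close();
    }

    private static void handle(Socket connection) {
        try (connection) {
            // A real server would read and parse the request here, blocking as needed.
            connection.getOutputStream().write("HTTP/1.1 200 OK\r\n\r\n".getBytes());
            connection.getOutputStream().flush();
        } catch (IOException ignored) {
            // connection error: log and drop
        }
    }
}
```

With, say, a few thousand platform threads as a practical ceiling, the number of simultaneously open connections is bounded, which is exactly the limitation the nonblocking model below avoids.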

The other approach is to use the asynchronous (nonblocking) I/O model. In this model, a single event loop thread dispatches I/O requests to event handlers. You have a variety of asynchronous I/O technologies to choose from. On the JVM you can use one of the NIO-based frameworks such as Netty, Vertx, Spring Reactor, or JBoss Undertow. One popular non-JVM option is NodeJS, a platform built on Chrome’s JavaScript engine.

Nonblocking I/O is much more scalable because it doesn’t have the overhead of using multiple threads. The drawback, though, is that the asynchronous, callback-based programming model is much more complex. The code is more difficult to write, understand, and debug. Event handlers must return quickly to avoid blocking the event loop thread.

Also, whether using nonblocking I/O has a meaningful overall benefit depends on the characteristics of the API gateway’s request-processing logic. Netflix had mixed results when it rewrote Zuul, its edge server, to use NIO (see https://medium.com/netflix-techblog/zuul-2-the-netflix-journey-to-asynchronous-non-blocking-systems-45947377fb5c). On one hand, as you would expect, using NIO reduced the cost of each network connection, due to the fact that there’s no longer a dedicated thread for each one. Also, a Zuul cluster that ran I/O-intensive logic—such as request routing—had a 25% increase in throughput and a 25% reduction in CPU utilization. On the other hand, a Zuul cluster that ran CPU-intensive logic—such as decryption and compression—showed no improvement.

Using reactive programming abstractions

As mentioned earlier, API composition consists of invoking multiple backend services. Some backend service requests depend entirely on the client request’s parameters. Others might depend on the results of other service requests. One approach is for an API endpoint handler method to call the services in the order determined by the dependencies. For example, the following listing shows the handler for the findOrder() request that’s written this way. It calls each of the four services, one after the other.

Listing 8.1. Fetching the order details by calling the backend services sequentially

@RestController
public class OrderDetailsController {

  @RequestMapping("/order/{orderId}")
  public OrderDetails getOrderDetails(@PathVariable String orderId) {

    OrderInfo orderInfo = orderService.findOrderById(orderId);

    TicketInfo ticketInfo = kitchenService
            .findTicketByOrderId(orderId);

    DeliveryInfo deliveryInfo = deliveryService
            .findDeliveryByOrderId(orderId);

    BillInfo billInfo = accountingService
            .findBillByOrderId(orderId);

    OrderDetails orderDetails =
         OrderDetails.makeOrderDetails(orderInfo, ticketInfo,
                                       deliveryInfo, billInfo);

    return orderDetails;
  }
  ...

The drawback of calling the services sequentially is that the response time is the sum of the service response times. In order to minimize response time, the composition logic should, whenever possible, invoke services concurrently. In this example, there are no dependencies between the service calls. All services should be invoked concurrently, which significantly reduces response time. The challenge is to write concurrent code that’s maintainable.
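
For instance, the four calls could be issued concurrently with Java 8 CompletableFutures. This is only a sketch; the callService method is a hypothetical stand-in for the remote invocations:

```java
import java.util.concurrent.CompletableFuture;

// Sketch: invoke the four backend services concurrently and combine the results.
// The response time is now roughly that of the slowest call, not the sum of all four.
public class ConcurrentOrderDetailsComposer {

    public static String getOrderDetails(String orderId) {
        CompletableFuture<String> orderInfo =
                CompletableFuture.supplyAsync(() -> callService("order-service", orderId));
        CompletableFuture<String> ticketInfo =
                CompletableFuture.supplyAsync(() -> callService("kitchen-service", orderId));
        CompletableFuture<String> deliveryInfo =
                CompletableFuture.supplyAsync(() -> callService("delivery-service", orderId));
        CompletableFuture<String> billInfo =
                CompletableFuture.supplyAsync(() -> callService("accounting-service", orderId));

        // Wait for all four results, then combine them into the response.
        return CompletableFuture.allOf(orderInfo, ticketInfo, deliveryInfo, billInfo)
                .thenApply(v -> String.join(",",
                        orderInfo.join(), ticketInfo.join(),
                        deliveryInfo.join(), billInfo.join()))
                .join();
    }

    // Placeholder for an actual remote call (REST, gRPC, and so on).
    private static String callService(String service, String orderId) {
        return service + ":" + orderId;
    }
}
```

Even this small example hints at the maintainability problem: mixing sequential dependencies with concurrent calls quickly complicates the future-wiring code, which is what the reactive abstractions below are meant to tame.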

This is because the traditional way to write scalable, concurrent code is to use callbacks. Asynchronous, event-driven I/O is inherently callback-based. Even a Servlet API-based API composer that invokes services concurrently typically uses callbacks. It could execute requests concurrently by calling ExecutorService.submit(Callable). The problem there is that this method returns a Future, which has a blocking API. A more scalable approach is for an API composer to call ExecutorService.submit(Runnable) and for each Runnable to invoke a callback with the outcome of the request. The callback accumulates results, and once all of them have been received it sends back the response to the client.

Writing API composition code using the traditional asynchronous callback approach quickly leads you to callback hell. The code will be tangled, difficult to understand, and error prone, especially when composition requires a mixture of parallel and sequential requests. A much better approach is to write API composition code in a declarative style using a reactive approach. Examples of reactive abstractions for the JVM include the following:

  • Java 8 CompletableFutures
  • Project Reactor Monos
  • RxJava (Reactive Extensions for Java) Observables, created by Netflix specifically to solve this problem in its API gateway
  • Scala Futures
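Of these, CompletableFuture ships with the JDK, so the declarative style can be sketched without any extra dependencies. The service stubs below are hypothetical stand-ins for the four backend calls; the point is that all four requests are in flight concurrently and the combination step is declarative rather than callback-based:

```java
import java.util.concurrent.CompletableFuture;

public class ConcurrentComposition {

    // Hypothetical stand-ins for the four backend service calls.
    static CompletableFuture<String> findOrder(String id)    { return CompletableFuture.supplyAsync(() -> "order-" + id); }
    static CompletableFuture<String> findTicket(String id)   { return CompletableFuture.supplyAsync(() -> "ticket-" + id); }
    static CompletableFuture<String> findDelivery(String id) { return CompletableFuture.supplyAsync(() -> "delivery-" + id); }
    static CompletableFuture<String> findBill(String id)     { return CompletableFuture.supplyAsync(() -> "bill-" + id); }

    static String getOrderDetails(String orderId) {
        // All four requests start immediately and run concurrently.
        CompletableFuture<String> order    = findOrder(orderId);
        CompletableFuture<String> ticket   = findTicket(orderId);
        CompletableFuture<String> delivery = findDelivery(orderId);
        CompletableFuture<String> bill     = findBill(orderId);
        // Declaratively combine the four outcomes into one result.
        return CompletableFuture.allOf(order, ticket, delivery, bill)
                .thenApply(ignored -> String.join(",",
                        order.join(), ticket.join(), delivery.join(), bill.join()))
                .join();
    }
}
```

Because the requests were already started before allOf() is called, the overall latency is roughly the slowest call rather than the sum of all four.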

A NodeJS-based API gateway would use JavaScript promises or RxJS, which is reactive extensions for JavaScript. Using one of these reactive abstractions will enable you to write concurrent code that’s simple and easy to understand. Later in this chapter, I show an example of this style of coding using Project Reactor Monos and version 5 of the Spring Framework.

Handling partial failures

As well as being scalable, an API gateway must also be reliable. One way to achieve reliability is to run multiple instances of the gateway behind a load balancer. If one instance fails, the load balancer will route requests to the other instances.

Another way to ensure that an API gateway is reliable is to properly handle failed requests and requests that have unacceptably high latency. When an API gateway invokes a service, there’s always a chance that the service is slow or unavailable. An API gateway may wait a very long time, perhaps indefinitely, for a response, which consumes resources and prevents it from sending a response to its client. An outstanding request to a failed service might even consume a limited, precious resource such as a thread and ultimately result in the API gateway being unable to handle any other requests. The solution, as described in chapter 3, is for an API gateway to use the Circuit breaker pattern when invoking services.
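The fail-fast behavior at the heart of the Circuit breaker pattern can be sketched in a few lines. This is a deliberately minimal illustration, not a production implementation: it tracks only consecutive failures and omits the timed half-open/recovery state that a real library such as Resilience4j provides.

```java
import java.util.function.Supplier;

// A minimal circuit breaker sketch: after a threshold of consecutive
// failures, further calls return a fallback immediately instead of
// invoking the (presumably failing) service and tying up a thread.
public class CircuitBreaker {

    private final int failureThreshold;
    private int consecutiveFailures = 0;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public boolean isOpen() {
        return consecutiveFailures >= failureThreshold;
    }

    public <T> T invoke(Supplier<T> serviceCall, T fallback) {
        if (isOpen()) {
            return fallback;            // fail fast: don't call the service at all
        }
        try {
            T result = serviceCall.get();
            consecutiveFailures = 0;    // a success closes the circuit again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback;
        }
    }
}
```

An API composer would wrap each backend call in a breaker like this, so that one failed service degrades the response (for example, omitting the delivery status) rather than making the whole request hang.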

Being a good citizen in the architecture

In chapter 3 I described patterns for service discovery, and in chapter 11, I cover patterns for observability. The service discovery patterns enable a service client, such as an API gateway, to determine the network location of a service instance so that it can invoke it. The observability patterns enable developers to monitor the behavior of an application and troubleshoot problems. An API gateway, like other services in the architecture, must implement the patterns that have been selected for the architecture.

8.3. Implementing an API gateway

Let’s now look at how to implement an API gateway. As mentioned earlier, the responsibilities of an API gateway are as follows:

  • Request routing—Routes requests to services using criteria such as the HTTP request method and path. The API gateway must route using the HTTP request method when the application has one or more CQRS query services. As discussed in chapter 7, in such an architecture commands and queries are handled by separate services.
  • API composition—Implements a GET REST endpoint using the API composition pattern, described in chapter 7. The request handler combines the results of invoking multiple services.
  • Edge functions—Most notable among these is authentication.
  • Protocol translation—Translates between client-friendly protocols and the client-unfriendly protocols used by services.
  • Being a good citizen in the application's architecture.

There are a couple of different ways to implement an API gateway:

  • Using an off-the-shelf API gateway product/service—This option requires little or no development but is the least flexible. For example, an off-the-shelf API gateway typically doesn't support API composition.
  • Developing your own API gateway using either an API gateway framework or a web framework as the starting point—This is the most flexible approach, though it requires some development effort.

Let’s look at these options, starting with using an off-the-shelf API gateway product or service.

8.3.1. Using an off-the-shelf API gateway product/service

Several off-the-shelf services and products implement API gateway features. Let's first look at a couple of services that are provided by AWS. After that, I'll discuss some products that you can download, configure, and run yourself.

The AWS API gateway

The AWS API gateway, one of the many services provided by Amazon Web Services, is a service for deploying and managing APIs. An AWS API gateway API is a set of REST resources, each of which supports one or more HTTP methods. You configure the API gateway to route each (Method, Resource) to a backend service. A backend service is either an AWS Lambda Function, described later in chapter 12, an application-defined HTTP service, or an AWS service. If necessary, you can configure the API gateway to transform request and response using a template-based mechanism. The AWS API gateway can also authenticate requests.

The AWS API gateway fulfills some of the requirements for an API gateway that I listed earlier. The API gateway is provided by AWS, so you’re not responsible for installation and operations. You configure the API gateway, and AWS handles everything else, including scaling.

Unfortunately, the AWS API gateway has several drawbacks and limitations that cause it to not fulfill other requirements. It doesn’t support API composition, so you’d need to implement API composition in the backend services. The AWS API gateway only supports HTTP(S) with a heavy emphasis on JSON. It only supports the Server-side discovery pattern, described in chapter 3. An application will typically use an AWS Elastic Load Balancer to load balance requests across a set of EC2 instances or ECS containers. Despite these limitations, unless you need API composition, the AWS API gateway is a good implementation of the API gateway pattern.

AWS Application Load Balancer

Another AWS service that provides API gateway-like functionality is the AWS Application Load Balancer, which is a load balancer for HTTP, HTTPS, WebSocket, and HTTP/2 (https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/). When configuring an Application Load Balancer, you define routing rules that route requests to backend services, which must be running on AWS EC2 instances.

Like the AWS API gateway, the AWS Application Load Balancer meets some of the requirements for an API gateway. It implements basic routing functionality. It’s hosted, so you’re not responsible for installation or operations. Unfortunately, it’s quite limited. It doesn’t implement HTTP method-based routing. Nor does it implement API composition or authentication. As a result, the AWS Application Load Balancer doesn’t meet the requirements for an API gateway.

Using an API gateway product

Another option is to use an API gateway product such as Kong or Traefik. These are open source packages that you install and operate yourself. Kong is based on the NGINX HTTP server, and Traefik is written in GoLang. Both products let you configure flexible routing rules that use the HTTP method, headers, and path to select the backend service. Kong lets you configure plugins that implement edge functions such as authentication. Traefik can even integrate with some service registries, described in chapter 3.

Although these products implement edge functions and powerful routing capabilities, they have some drawbacks. You must install, configure, and operate them yourself. They don’t support API composition. And if you want the API gateway to perform API composition, you must develop your own API gateway.

8.3.2. Developing your own API gateway

Developing an API gateway isn’t particularly difficult. It’s basically a web application that proxies requests to other services. You can build one using your favorite web framework. There are, however, two key design problems that you’ll need to solve:

  • Implementing a mechanism for defining routing rules in order to minimize the complex coding
  • Correctly implementing the HTTP proxying behavior, including how HTTP headers are handled
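To make the first problem concrete, a routing-rule mechanism can be sketched as a small routing table that matches on HTTP method and path prefix. This is an illustrative toy, not any framework's API; note that matching on the method is exactly what lets commands and queries go to different services:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

// A toy routing table: maps (HTTP method, path prefix) to a backend
// service URL. Rules are checked in insertion order; first match wins.
public class Router {

    private final Map<String, String> routes = new LinkedHashMap<>();

    public Router addRoute(String method, String pathPrefix, String serviceUrl) {
        routes.put(method + " " + pathPrefix, serviceUrl);
        return this;
    }

    public Optional<String> route(String method, String path) {
        return routes.entrySet().stream()
                .filter(e -> {
                    String[] rule = e.getKey().split(" ", 2);
                    return rule[0].equals(method) && path.startsWith(rule[1]);
                })
                .map(Map.Entry::getValue)
                .findFirst();
    }
}
```

With rules such as GET /orders routed to a query-side service and POST /orders routed to a command-side service, the table directly supports the CQRS-style routing discussed earlier. The second problem, faithful HTTP proxying (hop-by-hop headers, X-Forwarded-For, streaming bodies), is considerably subtler, which is a large part of why a framework is a better starting point.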

Consequently, a better starting point for developing an API gateway is to use a framework designed for that purpose. Its built-in functionality significantly reduces the amount of code you need to write.

We’ll take a look at Netflix Zuul, an open source project by Netflix, and then consider the Spring Cloud Gateway, an open source project from Pivotal.

Using Netflix Zuul

Netflix developed the Zuul framework to implement edge functions such as routing, rate limiting, and authentication (https://github.com/Netflix/zuul). The Zuul framework uses the concept of filters, reusable request interceptors that are similar to servlet filters or NodeJS Express middleware. Zuul handles an HTTP request by assembling a chain of applicable filters that then transform the request, invoke backend services, and transform the response before it’s sent back to the client. Although you can use Zuul directly, using Spring Cloud Zuul, an open source project from Pivotal, is far easier. Spring Cloud Zuul builds on Zuul and through convention-over-configuration makes developing a Zuul-based server remarkably easy.

Zuul handles the routing and edge functionality. You can extend Zuul by defining Spring MVC controllers that implement API composition. But a major limitation of Zuul is that it can only implement path-based routing. For example, it’s incapable of routing GET /orders to one service and POST /orders to a different service. Consequently, Zuul doesn’t support the query architecture described in chapter 7.

About the Spring Cloud Gateway

None of the options I’ve described so far meet all the requirements. In fact, I had given up in my search for an API gateway framework and had started developing an API gateway based on Spring MVC. But then I discovered the Spring Cloud Gateway project (https://cloud.spring.io/spring-cloud-gateway/). It’s an API gateway framework built on top of several frameworks, including Spring Framework 5, Spring Boot 2, and Spring Webflux, which is a reactive web framework that’s part of Spring Framework 5 and built on Project Reactor. Project Reactor is an NIO-based reactive framework for the JVM that provides the Mono abstraction used a little later in this chapter.

Spring Cloud Gateway provides a simple yet comprehensive way to do the following:

  • Route requests to backend services.
  • Implement request handlers that perform API composition.
  • Handle edge functions such as authentication.

Figure 8.8 shows the key parts of an API gateway built using this framework.

Figure 8.8. The architecture of an API gateway built using Spring Cloud Gateway

The API gateway consists of the following packages:

  • ApiGatewayMain package—Defines the Main program for the API gateway.
  • One or more API packages—An API package implements a set of API endpoints. For example, the Orders package implements the Order-related API endpoints.
  • Proxy package—Consists of proxy classes that are used by the API packages to invoke the services.

The OrderConfiguration class defines the Spring beans responsible for routing Order-related requests. A routing rule can match against some combination of the HTTP method, the headers, and the path. The orderProxyRoutes @Bean defines rules that map API operations to backend service URLs. For example, it routes paths beginning with /orders to the Order Service.

The orderHandlers @Bean defines rules that override those defined by orderProxyRoutes. These rules map API operations to handler methods, which are the Spring WebFlux equivalent of Spring MVC controller methods. For example, orderHandlers maps the operation GET /orders/{orderId} to the OrderHandlers::getOrderDetails() method.

The OrderHandlers class implements various request handler methods, such as OrderHandlers::getOrderDetails(). This method uses API composition to fetch the order details (described earlier). The handle methods invoke backend services using remote proxy classes, such as OrderService. This class defines methods for invoking the OrderService.

Let’s take a look at the code, starting with the OrderConfiguration class.

The OrderConfiguration class

The OrderConfiguration class, shown in listing 8.2, is a Spring @Configuration class. It defines the Spring @Beans that implement the /orders endpoints. The orderProxyRouting and orderHandlerRouting @Beans use the Spring WebFlux routing DSL to define the request routing. The orderHandlers @Bean implements the request handlers that perform API composition.

Listing 8.2. The Spring @Beans that implement the /orders endpoints
@Configuration
@EnableConfigurationProperties(OrderDestinations.class)
public class OrderConfiguration {

  @Bean
  public RouteLocator orderProxyRouting(OrderDestinations orderDestinations) {
    return Routes.locator()
            .route("orders")
            .uri(orderDestinations.orderServiceUrl)
            .predicate(path("/orders").or(path("/orders/*")))             1
             .and()
            ...
            .build();
  }

  @Bean
  public RouterFunction<ServerResponse>
             orderHandlerRouting(OrderHandlers orderHandlers) {
    return RouterFunctions.route(GET("/orders/{orderId}"),                2
                       orderHandlers::getOrderDetails);
  }

  @Bean
  public OrderHandlers orderHandlers(OrderService orderService,
                               KitchenService kitchenService,
                               DeliveryService deliveryService,
                               AccountingService accountingService) {
    return new OrderHandlers(orderService, kitchenService,                3
                              deliveryService, accountingService);
  }

}

  • 1 By default, route all requests whose path begins with /orders to the URL orderDestinations.orderServiceUrl.
  • 2 Route a GET /orders/{orderId} to orderHandlers::getOrderDetails.
  • 3 The @Bean, which implements the custom request-handling logic

OrderDestinations, shown in the following listing, is a Spring @ConfigurationProperties class that enables the externalized configuration of backend service URLs.

Listing 8.3. The externalized configuration of backend service URLs
@ConfigurationProperties(prefix = "order.destinations")
public class OrderDestinations {

  @NotNull
  public String orderServiceUrl;

  public String getOrderServiceUrl() {
    return orderServiceUrl;
  }

  public void setOrderServiceUrl(String orderServiceUrl) {
    this.orderServiceUrl = orderServiceUrl;
  }
  ...
}

You can, for example, specify the URL of the Order Service either as the order.destinations.orderServiceUrl property in a properties file or as an operating system environment variable, ORDER_DESTINATIONS_ORDER_SERVICE_URL.
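The resolution logic behind this kind of externalized configuration can be sketched in plain Java (Spring Boot does this for you; the default URL here is an assumption for illustration):

```java
// A sketch of externalized configuration resolution in the style Spring
// Boot uses: an environment variable, when set, overrides the property,
// which in turn overrides a built-in default. The property and variable
// names follow the book's example; the default URL is illustrative.
public class Destinations {

    static String orderServiceUrl() {
        String fromEnv = System.getenv("ORDER_DESTINATIONS_ORDER_SERVICE_URL");
        if (fromEnv != null) {
            return fromEnv;
        }
        return System.getProperty("order.destinations.orderServiceUrl",
                                  "http://localhost:8080");
    }
}
```

Keeping the URL out of the code means the same gateway image can be pointed at different Order Service instances per environment.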

The OrderHandlers class

The OrderHandlers class, shown in the following listing, defines the request handler methods that implement custom behavior, including API composition. The getOrderDetails() method, for example, performs API composition to retrieve information about an order. This class is injected with several proxy classes that make requests to backend services.

Listing 8.4. The OrderHandlers class implements custom request-handling logic.
public class OrderHandlers {

  private OrderService orderService;
  private KitchenService kitchenService;
  private DeliveryService deliveryService;
  private AccountingService accountingService;

  public OrderHandlers(OrderService orderService,
                       KitchenService kitchenService,
                       DeliveryService deliveryService,
                       AccountingService accountingService) {
    this.orderService = orderService;
    this.kitchenService = kitchenService;
    this.deliveryService = deliveryService;
    this.accountingService = accountingService;
  }

  public Mono<ServerResponse> getOrderDetails(ServerRequest serverRequest) {
    String orderId = serverRequest.pathVariable("orderId");

    Mono<OrderInfo> orderInfo = orderService.findOrderById(orderId);

    Mono<Optional<TicketInfo>> ticketInfo =
       kitchenService
            .findTicketByOrderId(orderId)
            .map(Optional::of)                                      1
             .onErrorReturn(Optional.empty());                      2

    Mono<Optional<DeliveryInfo>> deliveryInfo =
        deliveryService
            .findDeliveryByOrderId(orderId)
            .map(Optional::of)
            .onErrorReturn(Optional.empty());

    Mono<Optional<BillInfo>> billInfo = accountingService
            .findBillByOrderId(orderId)
            .map(Optional::of)
            .onErrorReturn(Optional.empty());

    Mono<Tuple4<OrderInfo, Optional<TicketInfo>,                    3
                 Optional<DeliveryInfo>, Optional<BillInfo>>> combined =
            Mono.when(orderInfo, ticketInfo, deliveryInfo, billInfo);

    Mono<OrderDetails> orderDetails =                               4
         combined.map(OrderDetails::makeOrderDetails);

    return orderDetails.flatMap(person -> ServerResponse.ok()       5
             .contentType(MediaType.APPLICATION_JSON)
            .body(fromObject(person)));
  }

}

  • 1 Transform a TicketInfo into an Optional<TicketInfo>.
  • 2 If the service invocation failed, return Optional.empty().
  • 3 Combine the four values into a single value, a Tuple4.
  • 4 Transform the Tuple4 into an OrderDetails.
  • 5 Transform the OrderDetails into a ServerResponse.

The getOrderDetails() method implements API composition to fetch the order details. It’s written in a scalable, reactive style using the Mono abstraction, which is provided by Project Reactor. A Mono, which is a richer kind of Java 8 CompletableFuture, contains the outcome of an asynchronous operation that’s either a value or an exception. It has a rich API for transforming and combining the values returned by asynchronous operations. You can use Monos to write concurrent code in a style that’s simple and easy to understand. In this example, the getOrderDetails() method invokes the four services in parallel and combines the results to create an OrderDetails object.

The getOrderDetails() method takes a ServerRequest, which is the Spring WebFlux representation of an HTTP request, as a parameter and does the following:

  1. It extracts the orderId from the path.
  2. It invokes the four services asynchronously via their proxies, which return Monos. In order to improve availability, getOrderDetails() treats the results of all services except the OrderService as optional. If a Mono returned by an optional service contains an exception, the call to onErrorReturn() transforms it into a Mono containing an empty Optional.
  3. It combines the results asynchronously using Mono.when(), which returns a Mono<Tuple4> containing the four values.
  4. It transforms the Mono<Tuple4> into a Mono<OrderDetails> by calling OrderDetails::makeOrderDetails.
  5. It transforms the OrderDetails into a ServerResponse, which is the Spring WebFlux representation of the JSON/HTTP response.

As you can see, because getOrderDetails() uses Monos, it concurrently invokes the services and combines the results without using messy, difficult-to-read callbacks. Let’s take a look at one of the service proxies that return the results of a service API call wrapped in a Mono.

The OrderService class

The OrderService class, shown in the following listing, is a remote proxy for the Order Service. It invokes the Order Service using a WebClient, which is the Spring WebFlux reactive HTTP client.

Listing 8.5. The OrderService class, a remote proxy for the Order Service
@Service
public class OrderService {

  private OrderDestinations orderDestinations;

  private WebClient client;

  public OrderService(OrderDestinations orderDestinations, WebClient client)
     {
    this.orderDestinations = orderDestinations;
    this.client = client;
  }

  public Mono<OrderInfo> findOrderById(String orderId) {
    Mono<ClientResponse> response = client
            .get()
            .uri(orderDestinations.orderServiceUrl + "/orders/{orderId}",
                 orderId)
            .exchange();                                                 1
     return response.flatMap(resp -> resp.bodyToMono(OrderInfo.class));  2
   }

}

  • 1 Invoke the service.
  • 2 Convert the response body to an OrderInfo.

The findOrder() method retrieves the OrderInfo for an order. It uses the WebClient to make the HTTP request to the Order Service and deserializes the JSON response to an OrderInfo. WebClient has a reactive API, and the response is wrapped in a Mono. The findOrder() method uses flatMap() to transform the Mono<ClientResponse> into a Mono<OrderInfo>. As the name suggests, the bodyToMono() method returns the response body as a Mono.

The ApiGatewayApplication class

The ApiGatewayApplication class, shown in the following listing, implements the API gateway’s main() method. It’s a standard Spring Boot main class.

Listing 8.6. The API gateway's main() method
@SpringBootConfiguration
@EnableAutoConfiguration
@EnableGateway
@Import(OrdersConfiguration.class)
public class ApiGatewayApplication {

  public static void main(String[] args) {
    SpringApplication.run(ApiGatewayApplication.class, args);
  }
}


The @EnableGateway annotation imports the Spring configuration for the Spring Cloud Gateway framework.


Spring Cloud Gateway is an excellent framework for implementing an API gateway. It enables you to configure basic proxying using a simple, concise routing rules DSL. It's also straightforward to route requests to handler methods that perform API composition and protocol translation. Spring Cloud Gateway is built using the scalable, reactive Spring Framework 5 and Project Reactor frameworks. But there's another appealing option for developing your own API gateway: GraphQL, a framework that provides a graph-based query language. Let's look at how that works.


8.3.3. Implementing an API gateway using GraphQL


Imagine that you're responsible for implementing the FTGO API gateway's GET /orders/{orderId} endpoint, which returns the order details. On the surface, implementing this endpoint might appear to be simple. But as described in section 8.1, this endpoint retrieves data from multiple services. Consequently, you need to use the API composition pattern and write code that invokes the services and combines the results.
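The composition logic itself can be quite small. Here's a sketch in JavaScript (the language of the GraphQL-based gateway developed later in this section) of what such code might look like. The three fetch functions are hypothetical stand-ins for calls to the services that own the data, not real FTGO APIs:

```javascript
// Hypothetical stand-ins for calls to Order Service, Delivery Service, and
// Accounting Service; a real implementation would make HTTP requests.
const fetchOrder = (orderId) => Promise.resolve({ orderId, state: 'APPROVED' });
const fetchDeliveryStatus = (orderId) => Promise.resolve({ eta: '18:25' });
const fetchBill = (orderId) => Promise.resolve({ total: 23.5 });

// API composition: invoke the services in parallel and merge their results.
function getOrderDetails(orderId) {
  return Promise.all([
    fetchOrder(orderId),
    fetchDeliveryStatus(orderId),
    fetchBill(orderId),
  ]).then(([order, delivery, bill]) => ({ ...order, ...delivery, ...bill }));
}
```

Invoking the services in parallel keeps the endpoint's latency close to that of the slowest service rather than the sum of all of them.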


Another challenge, mentioned earlier, is that different clients need slightly different data. For example, unlike the mobile application, the desktop SPA application displays your rating for the order. One way to tailor the data returned by the endpoint, as described in chapter 3, is to give the client the ability to specify the data they need. An endpoint can, for example, support query parameters such as the expand parameter, which specifies the related resources to return, and the field parameter, which specifies the fields of each resource to return. The other option is to define multiple versions of this endpoint as part of applying the Backends for frontends pattern. This is a lot of work for just one of the many API endpoints that the FTGO API gateway needs to implement.


Implementing an API gateway with a REST API that supports a diverse set of clients well is time consuming. Consequently, you may want to consider using a graph-based API framework, such as GraphQL, that’s designed to support efficient data fetching. The key idea with graph-based API frameworks is that, as figure 8.9 shows, the server’s API consists of a graph-based schema. The graph-based schema defines a set of nodes (types), which have properties (fields) and relationships with other nodes. The client retrieves data by executing a query that specifies the required data in terms of the graph’s nodes and their properties and relationships. As a result, a client can retrieve the data it needs in a single round-trip to the API gateway.

Figure 8.9. The API gateway's API consists of a graph-based schema that maps to the services. A client issues a query that retrieves multiple graph nodes. The graph-based API framework executes the query by retrieving data from one or more services.


Graph-based API technology has a couple of important benefits. It gives clients control over what data is returned. Consequently, developing a single API that’s flexible enough to support diverse clients becomes feasible. Another benefit is that even though the API is much more flexible, this approach significantly reduces the development effort. That’s because you write the server-side code using a query execution framework that’s designed to support API composition and projections. It’s as if, rather than force clients to retrieve data via stored procedures that you need to write and maintain, you let them execute queries against the underlying database.

Graph-based API technologies


The two most popular graph-based API technologies are GraphQL (http://graphql.org) and Netflix Falcor (http://netflix.github.io/falcor/). Netflix Falcor models server-side data as a virtual JSON object graph. The Falcor client retrieves data from a Falcor server by executing a query that retrieves properties of that JSON object. The client can also update properties. In the Falcor server, the properties of the object graph are mapped to backend data sources, such as services with REST APIs. The server handles a request to set or get properties by invoking one or more backend data sources.


GraphQL, developed by Facebook and released in 2015, is another popular graph-based API technology. It models the server-side data as a graph of objects that have fields and references to other objects. The object graph is mapped to backend data sources. GraphQL clients can execute queries that retrieve data and mutations that create and update data. Unlike Netflix Falcor, which is an implementation, GraphQL is a standard, with clients and servers available for a variety of languages, including NodeJS, Java, and Scala.


Apollo GraphQL is a popular JavaScript/NodeJS implementation (www.apollographql.com). It’s a platform that includes a GraphQL server and client. Apollo GraphQL implements some powerful extensions to the GraphQL specification, such as subscriptions that push changed data to the client.


This section talks about how to develop an API gateway using Apollo GraphQL. I’m only going to cover a few of the key features of GraphQL and Apollo GraphQL. For more information, you should consult the GraphQL and Apollo GraphQL documentation.


The GraphQL-based API gateway, shown in figure 8.10, is written in JavaScript using the NodeJS Express web framework and the Apollo GraphQL server. The key parts of the design are as follows:

  • GraphQL schema: The GraphQL schema defines the server-side data model and the queries it supports.
  • Resolver functions: The resolver functions map elements of the schema to the various backend services.
  • Proxy classes: The proxy classes invoke the FTGO application's services.
Figure 8.10. The design of the GraphQL-based FTGO API gateway


There’s also a small amount of glue code that integrates the GraphQL server with the Express web framework. Let’s look at each part, starting with the GraphQL schema.

Defining a GraphQL schema


A GraphQL API is centered around a schema, which consists of a collection of types that define the structure of the server-side data model and the operations, such as queries, that a client can perform. GraphQL has several different kinds of types. The example code in this section uses just two kinds of types: object types, which are the primary way of defining the data model, and enums, which are similar to Java enums. An object type has a name and a collection of typed, named fields. A field can be a scalar type, such as a number, string, or enum; a list of scalar types; a reference to another object type; or a collection of references to another object type. Despite resembling a field of a traditional object-oriented class, a GraphQL field is conceptually a function that returns a value. It can have arguments, which enable a GraphQL client to tailor the data the function returns.


GraphQL also uses fields to define the queries supported by the schema. You define the schema’s queries by declaring an object type, which by convention is called Query. Each field of the Query object is a named query, which has an optional set of parameters, and a return type. I found this way of defining queries a little confusing when I first encountered it, but it helps to keep in mind that a GraphQL field is a function. It will become even clearer when we look at how fields are connected to the backend data sources.


The following listing shows part of the schema for the GraphQL-based FTGO API gateway. It defines several object types. Most of the object types correspond to the FTGO application’s Consumer, Order, and Restaurant entities. It also has a Query object type that defines the schema’s queries.

Listing 8.7. The GraphQL schema for the FTGO API gateway
type Query {                               1
  orders(consumerId : Int!): [Order]
  order(orderId : Int!): Order
  consumer(consumerId : Int!): Consumer
}

type Consumer {
  id: ID                                   2
  firstName: String
  lastName: String
  orders: [Order]                          3
 }

type Order {
  orderId: ID,
  consumerId : Int,
  consumer: Consumer
  restaurant: Restaurant

  deliveryInfo : DeliveryInfo

  ...
}

type Restaurant {
  id: ID
  name: String
  ...
}

type DeliveryInfo {
  status : DeliveryStatus
  estimatedDeliveryTime : Int
  assignedCourier :String
}

enum DeliveryStatus {
  PREPARING
  READY_FOR_PICKUP
  PICKED_UP
  DELIVERED
}

  • 1 Defines the queries that a client can execute
  • 2 The unique ID for a Consumer
  • 3 A consumer has a list of orders.


Despite having a different syntax, the Consumer, Order, Restaurant, and DeliveryInfo object types are structurally similar to the corresponding Java classes. One difference is the ID type, which represents a unique identifier.


This schema defines three queries:

  • orders(): Returns the Orders for the specified Consumer
  • order(): Returns the specified Order
  • consumer(): Returns the specified Consumer


These queries may seem no different from the equivalent REST endpoints, but GraphQL gives the client tremendous control over the data that's returned. To understand why, let's look at how a client executes GraphQL queries.

Executing GraphQL queries


The principal benefit of using GraphQL is that its query language gives the client incredible control over the returned data. A client executes a query by making a request containing a query document to the server. In the simple case, a query document specifies the name of the query, the argument values, and the fields of the result object to return. Here's a simple query that retrieves the firstName and lastName of the consumer with a particular ID:

query {
  consumer(consumerId:1)       1
  {                            2
    firstName
    lastName
  }
}

  • 1 Specifies the query called consumer, which fetches a consumer
  • 2 The fields of the Consumer to return


This query returns those fields of the specified Consumer.


Here’s a more elaborate query that returns a consumer, their orders, and the ID and name of each order’s restaurant:

query {
  consumer(consumerId:1)  {
    id
    firstName
    lastName
    orders {
      orderId
      restaurant {
        id
        name
      }
      deliveryInfo {
        estimatedDeliveryTime
        assignedCourier
      }
    }
  }
}


This query tells the server to return more than just the fields of the Consumer. It retrieves the consumer’s Orders and each Order’s restaurant. As you can see, a GraphQL client can specify exactly the data to return, including the fields of transitively related objects.
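To see mechanically what "specify exactly the data to return" means, here's a small JavaScript sketch, not part of GraphQL itself, of projecting a nested result object down to a requested field selection, much as a GraphQL server does when assembling a response:

```javascript
// Project an object down to a selection: a map whose keys are field names and
// whose values are either true (include the scalar) or a nested selection.
function project(value, selection) {
  if (Array.isArray(value)) {
    return value.map((item) => project(item, selection));
  }
  const result = {};
  for (const [field, sub] of Object.entries(selection)) {
    result[field] = sub === true ? value[field] : project(value[field], sub);
  }
  return result;
}

const order = {
  orderId: 99,
  consumerId: 1,
  restaurant: { id: 5, name: 'Ajanta', address: '1888 Oakland Ave' },
};

// Keep only orderId plus the restaurant's name.
const projected = project(order, { orderId: true, restaurant: { name: true } });
console.log(projected); // { orderId: 99, restaurant: { name: 'Ajanta' } }
```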


The query language is more flexible than it might first appear. That’s because a query is a field of the Query object, and a query document specifies which of those fields the server should return. These simple examples retrieve a single field, but a query document can execute multiple queries by specifying multiple fields. For each field, the query document supplies the field’s arguments and specifies what fields of the result object it’s interested in. Here’s a query that retrieves two different consumers:

query {
  c1: consumer (consumerId:1)  { id, firstName, lastName}
  c2: consumer (consumerId:2)  { id, firstName, lastName}
}


In this query document, c1 and c2 are what GraphQL calls aliases. They’re used to distinguish between the two Consumers in the result, which would otherwise both be called consumer. This example retrieves two objects of the same type, but a client could retrieve several objects of different types.


A GraphQL schema defines the shape of the data and the supported queries. To be useful, it has to be connected to the source of the data. Let’s look at how to do that.

Connecting the schema to the data


When the GraphQL server executes a query, it must retrieve the requested data from one or more data stores. In the case of the FTGO application, the GraphQL server must invoke the APIs of the services that own the data. You associate a GraphQL schema with the data sources by attaching resolver functions to the fields of the object types defined by the schema. The GraphQL server implements the API composition pattern by invoking resolver functions to retrieve the data, first for the top-level query, and then recursively for the fields of the result object or objects.


The details of how resolver functions are associated with the schema depend on which GraphQL server you are using. Listing 8.8 shows how to define the resolvers when using the Apollo GraphQL server. You create a doubly nested JavaScript object. Each top-level property corresponds to an object type, such as Query and Order. Each second-level property, such as Order.consumer, defines a field’s resolver function.

Listing 8.8. Attaching resolver functions to fields of the GraphQL schema
const resolvers = {
  Query: {
    orders: resolveOrders,                 1
    consumer: resolveConsumer,
    order: resolveOrder
  },
  Order: {
    consumer: resolveOrderConsumer,        2
    restaurant: resolveOrderRestaurant,
    deliveryInfo: resolveOrderDeliveryInfo
...
};

  • 1 The resolver for the orders query
  • 2 The resolver for the consumer field of an Order


A resolver function has three parameters:

  • object: For a top-level query field, such as resolveOrders, object is a root object that's usually ignored by the resolver function. Otherwise, object is the value returned by the resolver for the parent object. For example, the resolver function for the Order.consumer field is passed the value returned by the Order's resolver function.
  • Query arguments: These are supplied by the query document.
  • Context: Global state of the query execution that's accessible by all resolvers. It's used, for example, to pass user information and dependencies to the resolvers.


A resolver function might invoke a single service or it might implement the API composition pattern and retrieve data from multiple services. An Apollo GraphQL server resolver function returns a Promise, which is JavaScript's version of Java's CompletableFuture. The promise contains the object (or a list of objects) that the resolver function retrieved from the data store. The GraphQL engine includes the return value in the result object.


Let’s look at a couple of examples. Here’s the resolveOrders() function, which is the resolver for the orders query:

function resolveOrders(_, { consumerId }, context) {
  return context.orderServiceProxy.findOrders(consumerId);
}


This function obtains the OrderServiceProxy from the context and invokes it to fetch a consumer’s orders. It ignores its first parameter. It passes the consumerId argument, provided by the query document, to OrderServiceProxy.findOrders(). The findOrders() method retrieves the consumer’s orders from OrderHistoryService.


Here’s the resolveOrderRestaurant() function, which is the resolver for the Order.restaurant field that retrieves an order’s restaurant:

function resolveOrderRestaurant({restaurantId}, args, context) {
    return context.restaurantServiceProxy.findRestaurant(restaurantId);
}


Its first parameter is the Order. It invokes RestaurantServiceProxy.findRestaurant() with the Order's restaurantId, which was provided by resolveOrders().


GraphQL uses a recursive algorithm to execute the resolver functions. First, it executes the resolver function for the top-level query specified by the Query document. Next, for each object returned by the query, it iterates through the fields specified in the Query document. If a field has a resolver, it invokes the resolver with the object and the arguments from the Query document. It then recurses on the object or objects returned by that resolver.
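To make the algorithm concrete, here's a toy, synchronous version of it in JavaScript. This is a sketch, not Apollo's actual implementation: selections are plain objects, resolvers are in-memory functions, and promises, arguments, and type checking are omitted:

```javascript
// A toy version of the recursive resolver algorithm. A selection entry is
// either true (a scalar field) or { type, selection } for a nested object.
function execute(resolvers, typeName, parent, selection) {
  const result = {};
  for (const [field, sub] of Object.entries(selection)) {
    const resolve = (resolvers[typeName] || {})[field];
    // Use the field's resolver if one is attached; otherwise read the
    // property directly from the parent object.
    const value = resolve ? resolve(parent) : parent[field];
    if (sub === true) {
      result[field] = value;
    } else if (Array.isArray(value)) {
      result[field] = value.map((v) =>
        execute(resolvers, sub.type, v, sub.selection));
    } else {
      result[field] = execute(resolvers, sub.type, value, sub.selection);
    }
  }
  return result;
}

// In-memory stand-ins for calls to the FTGO services.
const resolvers = {
  Query: { consumer: () => ({ id: 1, firstName: 'Jane' }) },
  Consumer: { orders: () => [{ orderId: 42, restaurantId: 5 }] },
  Order: { restaurant: (order) => ({ id: order.restaurantId, name: 'Ajanta' }) },
};

const result = execute(resolvers, 'Query', {}, {
  consumer: {
    type: 'Consumer',
    selection: {
      firstName: true,
      orders: {
        type: 'Order',
        selection: {
          orderId: true,
          restaurant: { type: 'Restaurant', selection: { name: true } },
        },
      },
    },
  },
});
console.log(JSON.stringify(result));
```

The recursion bottoms out at scalar fields, and a field without an attached resolver simply falls back to reading the property from its parent object.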


Figure 8.11 shows how this algorithm executes the query that retrieves a consumer’s orders and each order’s delivery information and restaurant. First, the GraphQL engine invokes resolveConsumer(), which retrieves Consumer. Next, it invokes resolveConsumerOrders(), which is the resolver for the Consumer.orders field that returns the consumer’s orders. The GraphQL engine then iterates through Orders, invoking the resolvers for the Order.restaurant and Order.deliveryInfo fields.

Figure 8.11. GraphQL executes a query by recursively invoking the resolver functions for the fields specified in the query document. First it executes the resolver for the query, and then it recursively invokes the resolvers for the fields in the result object hierarchy.


The result of executing the resolvers is a Consumer object populated with data retrieved from multiple services.


Let’s now look at how to optimize the executing of resolvers by using batching and caching.

Optimizing loading using batching and caching


GraphQL can potentially execute a large number of resolvers when executing a query. Because the GraphQL server executes each resolver independently, there’s a risk of poor performance due to excessive round-trips to the services. Consider, for example, a query that retrieves a consumer, their orders, and the orders’ restaurants. If there are N orders, then a simplistic implementation would make one call to Consumer Service, one call to Order History Service, and then N calls to Restaurant Service. Even though the GraphQL engine will typically make the calls to Restaurant Service in parallel, there’s a risk of poor performance. Fortunately, you can use a few techniques to improve performance.


One important optimization is to use a combination of server-side batching and caching. Batching turns N calls to a service, such as Restaurant Service, into a single call that retrieves a batch of N objects. Caching reuses the result of a previous fetch of the same object to avoid making an unnecessary duplicate call. The combination of batching and caching significantly reduces the number of round-trips to backend services.
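The essence of batching and caching fits in a few lines. The following sketch is not the real DataLoader API: unlike DataLoader, which automatically coalesces loads that occur within one event-loop tick, this simplified version batches load() calls until flush() is invoked explicitly:

```javascript
// A simplified batching-and-caching loader, inspired by DataLoader. batchFn
// receives an array of keys and returns a promise for values in the same order.
class SimpleLoader {
  constructor(batchFn) {
    this.batchFn = batchFn;
    this.cache = new Map(); // key -> promise of the value (caching)
    this.queue = [];        // pending { key, resolve } entries (batching)
  }

  load(key) {
    if (!this.cache.has(key)) {
      this.cache.set(key,
        new Promise((resolve) => this.queue.push({ key, resolve })));
    }
    return this.cache.get(key); // duplicate loads reuse the cached promise
  }

  flush() { // the real DataLoader does this automatically per event-loop tick
    const pending = this.queue;
    this.queue = [];
    return this.batchFn(pending.map((p) => p.key))
      .then((values) => pending.forEach((p, i) => p.resolve(values[i])));
  }
}

// Three load() calls result in a single batch call with the keys [5, 7].
let batchCalls = 0;
const batchFindRestaurants = (ids) => {
  batchCalls += 1;
  return Promise.resolve(ids.map((id) => ({ id, name: `Restaurant ${id}` })));
};

const loader = new SimpleLoader(batchFindRestaurants);
const r1 = loader.load(5);
const r2 = loader.load(7);
const r3 = loader.load(5); // cache hit: not queued a second time
loader.flush();
```

A production gateway creates one such loader per request, so cached data is never shared across users.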


A NodeJS-based GraphQL server can use the DataLoader module to implement batching and caching (https://github.com/facebook/dataloader). It coalesces loads that occur within a single execution of the event loop and calls a batch loading function that you provide. It also caches calls to eliminate duplicate loads. The following listing shows how RestaurantServiceProxy can use DataLoader. The findRestaurant() method loads a Restaurant via DataLoader.

Listing 8.9. Using a DataLoader to optimize calls to Restaurant Service
const DataLoader = require('dataloader');

class RestaurantServiceProxy {
    constructor() {
        this.dataLoader =                                 1
            new DataLoader(restaurantIds =>
             this.batchFindRestaurants(restaurantIds));
    }

    findRestaurant(restaurantId) {                        2
         return this.dataLoader.load(restaurantId);
    }

    batchFindRestaurants(restaurantIds) {                 3
       ...
    }
}

  • 1 Create a DataLoader, which uses batchFindRestaurants() as the batch loading function.
  • 2 Load the specified Restaurant via the DataLoader.
  • 3 Load a batch of Restaurants.


RestaurantServiceProxy and, hence, DataLoader are created for each request, so there’s no possibility of DataLoader mixing together different users’ data.


Let’s now look at how to integrate the GraphQL engine with a web framework so that it can be invoked by clients.

Integrating the Apollo GraphQL server with Express


The Apollo GraphQL server executes GraphQL queries. In order for clients to invoke it, you need to integrate it with a web framework. Apollo GraphQL server supports several web frameworks, including Express, a popular NodeJS web framework.


Listing 8.10 shows how to use the Apollo GraphQL server in an Express application. The key function is graphqlExpress, which is provided by the apollo-server-express module. It builds an Express request handler that executes GraphQL queries against a schema. This example configures Express to route requests to the GET /graphql and POST /graphql endpoints of this GraphQL request handler. It also creates a GraphQL context containing the proxies, which makes them available to the resolvers.

Listing 8.10. Integrating the GraphQL server with the Express web framework
const {graphqlExpress} = require("apollo-server-express");

const typeDefs = gql`                                                  1
   type Query {
    orders(consumerId : Int!): [Order]
   ...
  }

  type Consumer {
   ...

const resolvers = {                                                    2
   Query: {
  ...
  }
}

const schema = makeExecutableSchema({ typeDefs, resolvers });          3

const app = express();

function makeContextWithDependencies(req) {                            4
    const orderServiceProxy = new OrderServiceProxy();
    const consumerServiceProxy = new ConsumerServiceProxy();
    const restaurantServiceProxy = new RestaurantServiceProxy();
    ...
    return {orderServiceProxy, consumerServiceProxy,
              restaurantServiceProxy, ...};
}

function makeGraphQLHandler() {                                        5
     return graphqlExpress(req => {
        return {schema: schema, context: makeContextWithDependencies(req)}
    });
}

app.post('/graphql', bodyParser.json(), makeGraphQLHandler());         6

app.get('/graphql', makeGraphQLHandler());

app.listen(PORT);

  • 1 Define the GraphQL schema.
  • 2 Define the resolvers.
  • 3 Combine the schema with the resolvers to create an executable schema.
  • 4 Inject repositories into the context so they're available to resolvers.
  • 5 Make an Express request handler that executes GraphQL queries against the executable schema.
  • 6 Route POST /graphql and GET /graphql endpoints to the GraphQL server.


This example doesn’t handle concerns such as security, but those would be straightforward to implement. The API gateway could, for example, authenticate users using Passport, a NodeJS security framework described in chapter 11. The makeContextWithDependencies() function would pass the user information to each repository’s constructor so that they can propagate the user information to the services.

Let’s now look at how a client can invoke this server to execute GraphQL queries.

Writing a GraphQL client

There are a couple of different ways a client application can invoke the GraphQL server. Because the GraphQL server has an HTTP-based API, a client application could use an HTTP library to make requests, such as GET http://localhost:3000/graphql?query={orders(consumerId:1){orderId,restaurant{id}}}. It’s easier, though, to use a GraphQL client library, which takes care of properly formatting requests and typically provides features such as client-side caching.

The following listing shows the FtgoGraphQLClient class, which is a simple GraphQL-based client for the FTGO application. Its constructor instantiates ApolloClient, which is provided by the Apollo GraphQL client library. The FtgoGraphQLClient class defines a findConsumer() method that uses the client to retrieve the name of a consumer.

Listing 8.11. Executing a query using the Apollo GraphQL client
class FtgoGraphQLClient {

    constructor(...) {
        this.client = new ApolloClient({ ... });
    }

    findConsumer(consumerId) {
        return this.client.query({
            variables: { cid: consumerId},             1
             query: gql`
              query foo($cid : Int!) {                 2
                 consumer(consumerId: $cid)  {         3
                     id
                    firstName
                    lastName
                }
            } `,
        })
    }

}

  • 1 Supply the value of $cid.
  • 2 Define $cid as a variable of type Int.
  • 3 Set the value of the query parameter consumerId to $cid.

The FtgoGraphQLClient class can define a variety of query methods, such as findConsumer(). Each one executes a query that retrieves exactly the data needed by the client.

This section has barely scratched the surface of GraphQL’s capabilities. I hope I’ve demonstrated that GraphQL is a very appealing alternative to a more traditional, REST-based API gateway. It lets you implement an API that’s flexible enough to support a diverse set of clients. Consequently, you should consider using GraphQL to implement your API gateway.

Summary

  • Your application’s external clients usually access the application’s services via an API gateway. An API gateway provides each client with a custom API. It’s responsible for request routing, API composition, protocol translation, and implementation of edge functions such as authentication.
  • Your application can have a single API gateway or it can use the Backends for frontends pattern, which defines an API gateway for each type of client. The main advantage of the Backends for frontends pattern is that it gives the client teams greater autonomy, because they develop, deploy, and operate their own API gateway.
  • There are numerous technologies you can use to implement an API gateway, including off-the-shelf API gateway products. Alternatively, you can develop your own API gateway using a framework.
  • Spring Cloud Gateway is a good, easy-to-use framework for developing an API gateway. It routes requests using any request attribute, including the method and the path. Spring Cloud Gateway can route a request either directly to a backend service or to a custom handler method. It’s built using the scalable, reactive Spring Framework 5 and Project Reactor frameworks. You can write your custom request handlers in a reactive style using, for example, Project Reactor’s Mono abstraction.
  • GraphQL, a framework that provides a graph-based query language, is another excellent foundation for developing an API gateway. You write a graph-oriented schema to describe the server-side data model and its supported queries. You then map that schema to your services by writing resolvers, which retrieve data. GraphQL-based clients execute queries against the schema that specify exactly the data that the server should return. As a result, a GraphQL-based API gateway can support diverse clients.

Chapter 9. Testing microservices: Part 1

This chapter covers

  • Effective testing strategies for microservices
  • Using mocks and stubs to test a software element in isolation
  • Using the test pyramid to determine where to focus testing efforts
  • Unit testing the classes inside a service

FTGO, like many organizations, had adopted a traditional approach to testing. Testing is primarily an activity that happens after development. The FTGO developers throw their code over a wall to the QA team, who verify that the software works as expected. What’s more, most of their testing is done manually. Sadly, this approach to testing is broken—for two reasons:

  • Manual testing is extremely inefficient. You should never ask a human to do what a machine can do better. Compared to machines, humans are slow and can’t work 24/7. You won’t be able to deliver software rapidly and safely if you rely on manual testing. It’s essential that you write automated tests.
  • Testing is done far too late in the delivery process. There certainly is a role for tests that critique an application after it’s been written, but experience has shown that those tests are insufficient. A much better approach is for developers to write automated tests as part of development. It improves their productivity because, for example, they’ll have tests that provide immediate feedback while editing code.

In this regard, FTGO is a fairly typical organization. The Sauce Labs Testing Trends in 2018 report paints a fairly gloomy picture of the state of test automation (https://saucelabs.com/resources/white-papers/testing-trends-for-2018). It describes how only 26% of organizations are mostly automated, and a minuscule 3% are fully automated!

The reliance on manual testing isn’t because of a lack of tooling and frameworks. For example, JUnit, a popular Java testing framework, was first released in 1998. The reason for the lack of automated tests is mostly cultural: “Testing is QA’s job,” “It’s not the best use of a developer’s time,” and so on. It also doesn’t help that developing a fast-running, yet effective, maintainable test suite is challenging. And, a typical large, monolithic application is extremely difficult to test.

One key motivation for using the microservice architecture is, as described in chapter 2, improving testability. Yet at the same time, the complexity of the microservice architecture demands that you write automated tests. Furthermore, some aspects of testing microservices are challenging. That’s because we need to verify that services can interact correctly while minimizing the number of slow, complex, and unreliable end-to-end tests that launch many services.

This chapter is the first of two chapters on testing. It’s an introduction to testing. Chapter 10 covers more advanced testing concepts. The two chapters are long, but together they cover testing ideas and techniques that are essential to modern software development in general, and to the microservice architecture in particular.

I begin this chapter by describing effective testing strategies for a microservices-based application. These strategies enable you to be confident that your software works, while minimizing test complexity and execution time. After that, I describe how to write one particular kind of test for your services: unit tests. Chapter 10 covers the other kinds of tests: integration, component, and end-to-end.

Let’s start by taking a look at testing strategies for microservices.

Why an introduction to testing?

You may be wondering why this chapter includes an introduction to basic testing concepts. If you’re already familiar with concepts such as the test pyramid and the different types of tests, feel free to speed-read this chapter and move onto the next one, which focuses on microservices-specific testing topics. But based on my experiences consulting for and training clients all over the world, a fundamental weakness of many software development organizations is the lack of automated testing. That’s because if you want to deliver software quickly and reliably, it’s absolutely essential to do automated testing. It’s the only way to have a short lead time, which is the time it takes to get committed code into production. Perhaps even more importantly, automated testing is essential because it forces you to develop a testable application. It’s typically very difficult to introduce automated testing into an already large, complex application. In other words, the fast track to monolithic hell is to not write automated tests.

9.1. Testing strategies for microservice architectures

Let’s say you’ve made a change to the FTGO application’s Order Service. Naturally, the next step is for you to run your code and verify that the change works correctly. One option is to test the change manually. First, you run Order Service and all its dependencies, which include infrastructure services such as a database and other application services. Then you “test” the service by either invoking its API or using the FTGO application’s UI. The downside of this approach is that it’s a slow, manual way to test your code.

A much better option is to have automated tests that you can run during development. Your development workflow should be: edit code, run tests (ideally with a single keystroke), repeat. The fast-running tests quickly tell you whether your changes work within a few seconds. But how do you write fast-running tests? And are they sufficient or do you need more comprehensive tests? These are the kind of questions I answer in this and other sections in this chapter.

I start this section with an overview of important automated testing concepts. We’ll look at the purpose of testing and the structure of a typical test. I cover the different types of tests that you’ll need to write. I also describe the test pyramid, which provides valuable guidance about where you should focus your testing efforts. After covering testing concepts, I discuss strategies for testing microservices. We’ll look at the distinct challenges of testing applications that have a microservice architecture. I describe techniques you can use to write simpler and faster, yet still-effective, tests for your microservices.

Let’s take a look at testing concepts.

9.1.1. Overview of testing

In this chapter, my focus is on automated testing, and I use the term test as shorthand for automated test. Wikipedia defines a test case, or test, as follows:

A test case is a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

https://en.wikipedia.org/wiki/Test_case

In other words, the purpose of a test is, as figure 9.1 shows, to verify the behavior of the System Under Test (SUT). In this definition, system is a fancy term that means the software element being tested. It might be something as small as a class, as large as the entire application, or something in between, such as a cluster of classes or an individual service. A collection of related tests form a test suite.

Figure 9.1. The goal of a test is to verify the behavior of the SUT. An SUT might be as small as a class or as large as an entire application.

Let’s first look at the concept of an automated test. Then I discuss the different kinds of tests that you’ll need to write. After that, I discuss the test pyramid, which describes the relative proportions of the different types of tests that you should write.

Writing automated tests

Automated tests are usually written using a testing framework. JUnit, for example, is a popular Java testing framework. Figure 9.2 shows the structure of an automated test. Each test is implemented by a test method, which belongs to a test class.

Figure 9.2. Each automated test is implemented by a test method that belongs to a test class. A test consists of four phases: setup, which initializes the test fixture, which is everything required to run the test; exercise, which invokes the SUT; verify, which verifies the outcome of the test; and teardown, which cleans up the test fixture.

An automated test typically consists of four phases (http://xunitpatterns.com/Four%20Phase%20Test.html):

  1. Setup: Initialize the test fixture, which consists of the SUT and its dependencies, to the desired initial state. For example, create the class under test and initialize it to the state required for it to exhibit the desired behavior.
  2. Exercise: Invoke the SUT—for example, invoke a method on the class under test.
  3. Verify: Make assertions about the invocation’s outcome and the state of the SUT. For example, verify the method’s return value and the new state of the class under test.
  4. Teardown: Clean up the test fixture, if necessary. Many tests omit this phase, but some types of database test will, for example, roll back a transaction initiated by the setup phase.

In order to reduce code duplication and simplify tests, a test class might have setup methods that are run before a test method, and teardown methods that are run afterwards. A test suite is a set of test classes. The tests are executed by a test runner.
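
The four phases above can be sketched in plain Java, without a test framework. The Account class and the amounts here are hypothetical, invented purely for this illustration; in practice you’d express the same structure with JUnit test methods:

```java
// A minimal four-phase test, written without a test framework.
// The Account class is hypothetical, defined here only for illustration.
class Account {
    private long balance;
    Account(long initialBalance) { this.balance = initialBalance; }
    void credit(long amount) { balance += amount; }
    long getBalance() { return balance; }
}

public class FourPhaseTestExample {
    public static void main(String[] args) {
        // 1. Setup: initialize the test fixture (the SUT in its initial state).
        Account account = new Account(100);

        // 2. Exercise: invoke the SUT.
        account.credit(50);

        // 3. Verify: make assertions about the outcome and the SUT's new state.
        if (account.getBalance() != 150) {
            throw new AssertionError("expected balance 150, got " + account.getBalance());
        }

        // 4. Teardown: clean up the fixture. A no-op here; a database test
        //    might roll back a transaction instead.
        System.out.println("test passed");
    }
}
```

In a JUnit test class, phases 1 and 4 would typically live in setup and teardown methods shared by all the test methods.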

Testing using mocks and stubs

An SUT often has dependencies. The trouble with dependencies is that they can complicate and slow down tests. For example, the OrderController class invokes OrderService, which ultimately depends on numerous other application services and infrastructure services. It wouldn’t be practical to test the OrderController class by running a large portion of the system. We need a way to test an SUT in isolation.

The solution, as figure 9.3 shows, is to replace the SUT’s dependencies with test doubles. A test double is an object that simulates the behavior of the dependency.

Figure 9.3. Replacing the SUT’s dependencies with test doubles enables it to be tested in isolation. The tests are simpler and faster.

There are two types of test doubles: stubs and mocks. The terms stubs and mocks are often used interchangeably, although they have slightly different behavior. A stub is a test double that returns values to the SUT. A mock is a test double that a test uses to verify that the SUT correctly invokes a dependency. Also, a mock is often a stub.

Later on in this chapter, you’ll see examples of test doubles in action. For example, section 9.2.5 shows how to test the OrderController class in isolation by using a test double for the OrderService class. In that example, the OrderService test double is implemented using Mockito, a popular mock object framework for Java. Chapter 10 shows how to test Order Service using test doubles for the other services that it invokes. Those test doubles respond to command messages sent by Order Service.
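
To make the stub/mock distinction concrete, here is a hand-rolled pair of test doubles for a hypothetical ConsumerService dependency. All the names in this sketch are invented for illustration; the examples later in this book use Mockito rather than hand-written doubles:

```java
import java.util.ArrayList;
import java.util.List;

// The dependency's interface (hypothetical, for illustration only).
interface ConsumerService {
    String findConsumerName(long consumerId);
}

// A stub is a test double that returns canned values to the SUT.
class ConsumerServiceStub implements ConsumerService {
    public String findConsumerName(long consumerId) { return "Ajanta"; }
}

// A mock additionally records invocations so the test can verify that the
// SUT called the dependency correctly. Like most mocks, it's also a stub:
// it returns a canned value.
class ConsumerServiceMock implements ConsumerService {
    final List<Long> invocations = new ArrayList<>();
    public String findConsumerName(long consumerId) {
        invocations.add(consumerId);
        return "Ajanta";
    }
}

// The SUT, tested in isolation from the real ConsumerService.
class OrderGreeter {
    private final ConsumerService consumerService;
    OrderGreeter(ConsumerService consumerService) { this.consumerService = consumerService; }
    String greet(long consumerId) {
        return "Hello, " + consumerService.findConsumerName(consumerId);
    }
}

public class TestDoubleExample {
    public static void main(String[] args) {
        ConsumerServiceMock mock = new ConsumerServiceMock();
        OrderGreeter greeter = new OrderGreeter(mock);
        String greeting = greeter.greet(1L);
        // Verify the outcome (relies on the stubbed return value) ...
        if (!greeting.equals("Hello, Ajanta")) throw new AssertionError(greeting);
        // ... and verify the interaction (what makes it a mock).
        if (!mock.invocations.equals(List.of(1L))) throw new AssertionError(mock.invocations.toString());
        System.out.println("test passed");
    }
}
```

A mock-object framework such as Mockito generates classes like ConsumerServiceMock for you and provides a fluent API for stubbing return values and verifying invocations.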

Let’s now look at the different types of tests.

The different types of tests

There are many different types of tests. Some tests, such as performance tests and usability tests, verify that the application satisfies its quality of service requirements. In this chapter, I focus on automated tests that verify the functional aspects of the application or service. I describe how to write four different types of tests:

  • Unit tests: Test a small part of a service, such as a class.
  • Integration tests: Verify that a service can interact with infrastructure services such as databases and other application services.
  • Component tests: Acceptance tests for an individual service.
  • End-to-end tests: Acceptance tests for the entire application.

They differ primarily in scope. At one end of the spectrum are unit tests, which verify the behavior of the smallest meaningful program element. For an object-oriented language such as Java, that’s a class. At the other end of the spectrum are end-to-end tests, which verify the behavior of an entire application. In the middle are component tests, which test individual services. Integration tests, as you’ll see in the next chapter, have a relatively small scope, but they’re more complex than pure unit tests. Scope is only one way of characterizing tests. Another way is to use the test quadrant.

Compile-time unit tests

Testing is an integral part of development. The modern development workflow is to edit code, then run tests. Moreover, if you’re a Test-Driven Development (TDD) practitioner, you develop a new feature or fix a bug by first writing a failing test and then writing the code to make it pass. Even if you’re not a TDD adherent, an excellent way to fix a bug is to write a test that reproduces the bug and then write the code that fixes it.

The tests that you run as part of this workflow are known as compile-time tests. In a modern IDE, such as IntelliJ IDEA or Eclipse, you typically don’t compile your code as a separate step. Rather, you use a single keystroke to compile the code and run the tests. In order to stay in the flow, these tests need to execute quickly—ideally, no more than a few seconds.

Using the test quadrant to categorize tests

A good way to categorize tests is Brian Marick’s test quadrant (www.exampler.com/old-blog/2003/08/21/#agile-testing-project-1). The test quadrant, shown in figure 9.4, categorizes tests along two dimensions:

  • Whether the test is business facing or technology facing: A business-facing test is described using the terminology of a domain expert, whereas a technology-facing test is described using the terminology of developers and the implementation.
  • Whether the goal of the test is to support programming or critique the application: Developers use tests that support programming as part of their daily work. Tests that critique the application aim to identify areas that need improvement.

Figure 9.4. The test quadrant categorizes tests along two dimensions. The first is whether the test is business facing or technology facing. The second is whether the goal of the test is to support programming or critique the application.

The test quadrant defines four different categories of tests:

  • Q1 (support programming, technology facing): unit and integration tests
  • Q2 (support programming, business facing): component and end-to-end tests
  • Q3 (critique application, business facing): usability and exploratory testing
  • Q4 (critique application, technology facing): nonfunctional acceptance tests such as performance tests

The test quadrant isn’t the only way of organizing tests. There’s also the test pyramid, which provides guidance on how many tests of each type to write.

Using the test pyramid as a guide to focusing testing efforts

We must write different kinds of tests in order to be confident that our application works. The challenge, though, is that the execution time and complexity of a test increase with its scope. Also, the larger the scope of a test and the more moving parts it has, the less reliable it becomes. Unreliable tests are almost as bad as no tests, because if you can’t trust a test, you’re likely to ignore failures.

On one end of the spectrum are unit tests for individual classes. They’re fast to execute, easy to write, and reliable. At the other end of the spectrum are end-to-end tests for the entire application. These tend to be slow, difficult to write, and often unreliable because of their complexity. Because we don’t have unlimited budget for development and testing, we want to focus on writing tests that have small scope without compromising the effectiveness of the test suite.

The test pyramid, shown in figure 9.5, is a good guide (https://martinfowler.com/bliki/TestPyramid.html). At the base of the pyramid are the fast, simple, and reliable unit tests. At the top of the pyramid are the slow, complex, and brittle end-to-end tests. Like the USDA food pyramid, although more useful and less controversial (https://en.wikipedia.org/wiki/History_of_USDA_nutrition_guides), the test pyramid describes the relative proportions of each type of test.

Figure 9.5. The test pyramid describes the relative proportions of each type of test that you need to write. As you move up the pyramid, you should write fewer and fewer tests.

The key idea of the test pyramid is that as we move up the pyramid we should write fewer and fewer tests. We should write lots of unit tests and very few end-to-end tests. As you’ll see in this chapter, I describe a strategy that emphasizes testing the pieces of a service. It even minimizes the number of component tests, which test an entire service.

It’s clear how to test individual microservices such as Consumer Service, which don’t depend on any other services. But what about services such as Order Service, that do depend on numerous other services? And how can we be confident that the application as a whole works? This is the key challenge of testing applications that have a microservice architecture. The complexity of testing has moved from the individual services to the interactions between them. Let’s look at how to tackle this problem.

9.1.2. The challenge of testing microservices

Interprocess communication plays a much more important role in a microservices-based application than in a monolithic application. A monolithic application might communicate with a few external clients and services. For example, the monolithic version of the FTGO application uses a few third-party web services, such as Stripe for payments, Twilio for messaging, and Amazon SES for email, which have stable APIs. Any interaction between the modules of the application is through programming language-based APIs. Interprocess communication is very much on the edge of the application.

In contrast, interprocess communication is central to the microservice architecture. A microservices-based application is a distributed system. Teams are constantly developing their services and evolving their APIs. It’s essential that developers of a service write tests that verify that their service interacts with its dependencies and clients.

As described in chapter 3, services communicate with each other using a variety of interaction styles and IPC mechanisms. Some services use request/response-style interaction that’s implemented using a synchronous protocol, such as REST or gRPC. Other services interact through request/asynchronous reply or publish/subscribe using asynchronous messaging. For instance, figure 9.6 shows how some of the services in the FTGO application communicate. Each arrow points from a consumer service to a producer service.

Figure 9.6. Some of the interservice communication in the FTGO application. Each arrow points from a consumer service to a producer service.

The arrow points in the direction of the dependency, from the consumer of the API to the provider of the API. The assumptions that a consumer makes about an API depend on the nature of the interaction:

  • REST client→service: The API gateway routes requests to services and implements API composition.
  • Domain event consumer→publisher: Order History Service consumes events published by Order Service.
  • Command message requestor→replier: Order Service sends command messages to various services and consumes the replies.

Each interaction between a pair of services represents an agreement or contract between the two services. Order History Service and Order Service must, for example, agree on the event message structure and the channel that they’re published to. Similarly, the API gateway and the services must agree on the REST API endpoints. And Order Service and each service that it invokes using asynchronous request/response must agree on the command channel and the format of the command and reply messages.

As a developer of a service, you need to be confident that the services you consume have stable APIs. Similarly, you don’t want to unintentionally make breaking changes to your service’s API. For example, if you’re working on Order Service, you want to be sure that the developers of your service’s dependencies, such as Consumer Service and Kitchen Service, don’t change their APIs in ways that are incompatible with your service. Similarly, you must ensure that you don’t change Order Service’s API in a way that breaks the API Gateway or Order History Service.

One way to verify that two services can interact is to run both services, invoke an API that triggers the communication, and verify that it has the expected outcome. This will certainly catch integration problems, but it’s basically an end-to-end test. The test likely would need to run numerous other transitive dependencies of those services. A test might also need to invoke complex, high-level functionality such as business logic, even if its goal is to test relatively low-level IPC. It’s best to avoid writing end-to-end tests like these. Somehow, we need to write faster, simpler, and more reliable tests that ideally test services in isolation. The solution is to use what’s known as consumer-driven contract testing.

Consumer-driven contract testing

Imagine that you’re a member of the team developing API Gateway, described in chapter 8. The API Gateway’s OrderServiceProxy invokes various REST endpoints, including the GET /orders/{orderId} endpoint. It’s essential that we write tests that verify that API Gateway and Order Service agree on an API. In the terminology of consumer contract testing, the two services participate in a consumer-provider relationship. API Gateway is a consumer, and Order Service is a provider. A consumer contract test is an integration test for a provider, such as Order Service, that verifies that its API matches the expectations of a consumer, such as API Gateway.

A consumer contract test focuses on verifying that the “shape” of a provider’s API meets the consumer’s expectations. For a REST endpoint, a contract test verifies that the provider implements an endpoint that

  • Has the expected HTTP method and path
  • Accepts the expected headers, if any
  • Accepts a request body, if any
  • Returns a response with the expected status code, headers, and body

It’s important to remember that contract tests don’t thoroughly test the provider’s business logic. That’s the job of unit tests. Later on, you’ll see that consumer contract tests for a REST API are in fact mock controller tests.

The team that develops the consumer writes a contract test suite and adds it (for example, via a pull request) to the provider’s test suite. The developers of other services that invoke Order Service also contribute a test suite, as shown in figure 9.7. Each test suite will test those aspects of Order Service’s API that are relevant to each consumer. The test suite for Order History Service, for example, verifies that Order Service publishes the expected events.

Figure 9.7. Each team that develops a service that consumes Order Service’s API contributes a contract test suite. The test suite verifies that the API matches the consumer’s expectations. This test suite, along with those contributed by other teams, is run by Order Service’s deployment pipeline.

These test suites are executed by the deployment pipeline for Order Service. If a consumer contract test fails, that failure tells the producer team that they’ve made a breaking change to the API. They must either fix the API or talk to the consumer team.

Pattern: Consumer-driven contract test

Verify that a service meets the expectations of its clients See http://microservices.io/patterns/testing/service-integration-contract-test.html.

Consumer-driven contract tests typically use testing by example. The interaction between a consumer and provider is defined by a set of examples, known as contracts. Each contract consists of example messages that are exchanged during one interaction. For instance, a contract for a REST API consists of an example HTTP request and response. On the surface, it may seem better to define the interaction using schemas written using, for example, OpenAPI or JSON schema. But it turns out schemas aren’t that useful when writing tests. A test can validate the response using the schema but it still needs to invoke the provider with an example request.

What’s more, consumer tests also need example responses. That’s because even though the focus of consumer-driven contract testing is to test a provider, contracts are also used to verify that the consumer conforms to the contract. For instance, a consumer-side contract test for a REST client uses the contract to configure an HTTP stub service that verifies that the HTTP request matches the contract’s request and sends back the contract’s HTTP response. Testing both sides of the interaction ensures that the consumer and provider agree on the API. Later on we’ll look at examples of how to write this kind of test, but first let’s see how to write consumer contract tests using Spring Cloud Contract.

Pattern: Consumer-side contract test

Verify that the client of a service can communicate with the service. See https://microservices.io/patterns/testing/consumer-side-contract-test.html.

Testing services using Spring Cloud Contract

Two popular contract testing frameworks are Spring Cloud Contract (https://cloud.spring.io/spring-cloud-contract/), which is a consumer contract testing framework for Spring applications, and the Pact family of frameworks (https://github.com/pact-foundation), which support a variety of languages. The FTGO application is a Spring framework-based application, so in this chapter I’m going to describe how to use Spring Cloud Contract. It provides a Groovy domain-specific language (DSL) for writing contracts. Each contract is a concrete example of an interaction between a consumer and a provider, such as an HTTP request and response. Spring Cloud Contract code generates contract tests for the provider. It also configures mocks, such as a mock HTTP server, for consumer integration tests.

Say, for example, you’re working on API Gateway and want to write a consumer contract test for Order Service. Figure 9.8 shows the process, which requires you to collaborate with the Order Service team. You write contracts that define how API Gateway interacts with Order Service. The Order Service team uses these contracts to test Order Service, and you use them to test API Gateway. The sequence of steps is as follows:

  1. You write one or more contracts, such as the one shown in listing 9.1. Each contract consists of an HTTP request that API Gateway might send to Order Service and an expected HTTP response. You give the contracts, perhaps via a Git pull request, to the Order Service team.
  2. The Order Service team tests Order Service using consumer contract tests, which Spring Cloud Contract code generates from contracts.
    Figure 9.8. The API Gateway team writes the contracts. The Order Service team uses those contracts to test Order Service and publishes them to a repository. The API Gateway team uses the published contracts to test API Gateway.
  3. The Order Service team publishes the contracts that tested Order Service to a Maven repository.
  4. You use the published contracts to write tests for API Gateway.

Because you test API Gateway using the published contracts, you can be confident that it works with the deployed Order Service.

The contracts are the key part of this testing strategy. The following listing shows an example Spring Cloud Contract. It consists of an HTTP request and an HTTP response.

Listing 9.1. A contract that describes how API Gateway invokes Order Service
org.springframework.cloud.contract.spec.Contract.make {
    request {                                             1
        method 'GET'
        url '/orders/1223232'
    }
    response {                                            2
        status 200
        headers {
            header('Content-Type': 'application/json;charset=UTF-8')
        }
        body("{ ... }")
    }
}

  • 1 The HTTP request’s method and path
  • 2 The HTTP response’s status code, headers, and body

The request element is an HTTP request for the REST endpoint GET /orders/{orderId}. The response element is an HTTP response that describes an Order expected by API Gateway. The Groovy contracts are part of the provider’s code base. Each consumer team writes contracts that describe how their service interacts with the provider and gives them, perhaps via a Git pull request, to the provider team. The provider team is responsible for packaging the contracts as a JAR and publishing them to a Maven repository. The consumer-side tests download the JAR from the repository.

Each contract’s request and response play dual roles of test data and the specification of expected behavior. In a consumer-side test, the contract is used to configure a stub, which is similar to a Mockito mock object and simulates the behavior of Order Service. It enables API Gateway to be tested without running Order Service. In the provider-side test, the generated test class invokes the provider with the contract’s request and verifies that it returns a response that matches the contract’s response. The next chapter discusses the details of how to use Spring Cloud Contract, but now we’re going to look at how to use consumer contract testing for messaging APIs.
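This dual role can be sketched with a toy example in plain Java (purely illustrative, not Spring Cloud Contract code; the real framework works at the HTTP level with a WireMock-based stub and generated test classes):

```java
import java.util.Objects;
import java.util.function.Function;

// Toy illustration of the dual role described above: a single contract --
// an example request plus an example response -- both configures the
// consumer-side stub and supplies the input and expected output for the
// provider-side test.
public class ContractDemo {

    public static final class Contract {
        public final String request;
        public final String response;
        public Contract(String request, String response) {
            this.request = request;
            this.response = response;
        }
    }

    // Consumer-side test: the contract configures a stub that stands in for
    // the provider, checking the request and replying with the example response.
    public static Function<String, String> stubFor(Contract contract) {
        return request -> {
            if (!Objects.equals(request, contract.request)) {
                throw new AssertionError("request doesn't match contract: " + request);
            }
            return contract.response;
        };
    }

    // Provider-side test: invoke the real provider with the contract's request
    // and check that the response matches the contract's response.
    public static boolean providerHonors(Contract contract,
                                         Function<String, String> provider) {
        return Objects.equals(provider.apply(contract.request), contract.response);
    }

    public static void main(String[] args) {
        Contract contract =
            new Contract("GET /orders/1223232", "200 {\"orderId\": 1223232}");

        // A toy provider implementation that honors the contract.
        Function<String, String> provider = request -> "200 {\"orderId\": 1223232}";

        System.out.println(providerHonors(contract, provider)); // prints true
        System.out.println(stubFor(contract).apply("GET /orders/1223232"));
    }
}
```

Because both sides are tested against the same example messages, a provider that passes its test and a consumer that passes its test are guaranteed to agree on this interaction.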

Consumer contract tests for messaging APIs

A REST client isn’t the only kind of consumer that has expectations of a provider’s API. Services that subscribe to domain events and use asynchronous request/response-based communication are also consumers. They consume some other service’s messaging API, and make assumptions about the nature of that API. We must also write consumer contract tests for these services.

Spring Cloud Contract also provides support for testing messaging-based interactions. The structure of a contract and how it’s used by the tests depend on the type of interaction. A contract for domain event publishing consists of an example domain event. A provider test causes the provider to emit an event and verifies that it matches the contract’s event. A consumer test verifies that the consumer can handle that event. In the next chapter, I describe an example test.

A contract for an asynchronous request/response interaction is similar to an HTTP contract. It consists of a request message and a response message. A provider test invokes the API with the contract’s request message and verifies that the response matches the contract’s response. A consumer test uses the contract to configure a stub subscriber, which listens for the contract’s request message and replies with the specified response. The next chapter discusses an example test. But first we’ll take a look at the deployment pipeline, which runs these and other tests.

9.1.3. The deployment pipeline

Every service has a deployment pipeline. Jez Humble’s book, Continuous Delivery (Addison-Wesley, 2010) describes a deployment pipeline as the automated process of getting code from the developer’s desktop into production. As figure 9.9 shows, it consists of a series of stages that execute test suites, followed by a stage that releases or deploys the service. Ideally, it’s fully automated, but it might contain manual steps. A deployment pipeline is often implemented using a Continuous Integration (CI) server, such as Jenkins.

Figure 9.9. An example deployment pipeline for Order Service. It consists of a series of stages. Developers run the pre-commit tests before committing their code. The remaining stages are executed by an automated tool, such as the Jenkins CI server.

As code flows through the pipeline, the test suites subject it to increasingly thorough testing in environments that are more production-like. At the same time, the execution time of each test suite typically grows. The idea is to provide feedback about test failures as rapidly as possible.

The example deployment pipeline shown in figure 9.9 consists of the following stages:

  • Pre-commit tests stage: Runs the unit tests. This is executed by the developer before committing their changes.
  • Commit tests stage: Compiles the service, runs the unit tests, and performs static code analysis.
  • Integration tests stage: Runs the integration tests.
  • Component tests stage: Runs the component tests for the service.
  • Deploy stage: Deploys the service into production.
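Such a pipeline is typically defined as code on the CI server. The stages above might look roughly like this hypothetical Jenkins declarative pipeline (the Gradle task names and deploy script are illustrative assumptions, not the FTGO project’s actual build):

```groovy
// Hypothetical Jenkinsfile sketch of the stages in figure 9.9. Each stage
// runs a progressively more thorough test suite; a failure stops the pipeline.
pipeline {
    agent any
    stages {
        stage('Commit tests') {
            steps { sh './gradlew build check' }          // compile, unit tests, static analysis
        }
        stage('Integration tests') {
            steps { sh './gradlew integrationTest' }
        }
        stage('Component tests') {
            steps { sh './gradlew componentTest' }
        }
        stage('Deploy') {
            steps { sh './deploy-to-production.sh' }      // illustrative deploy script
        }
    }
}
```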

The CI server runs the commit stage when a developer commits a change. It executes extremely quickly, so it provides rapid feedback about the commit. The later stages take longer to run, providing less immediate feedback. If all the tests pass, the final stage deploys the service into production.

In this example, the deployment pipeline is fully automated all the way from commit to deployment. There are, however, situations that require manual steps. For example, you might need a manual testing stage, such as a staging environment. In such a scenario, the code progresses to the next stage when a tester clicks a button to indicate that it was successful. Alternatively, a deployment pipeline for an on-premise product would release the new version of the service. Later on, the released services would be packaged into a product release and shipped to customers.

Now that we’ve looked at the organization of the deployment pipeline and when it executes the different types of tests, let’s head to the bottom of the test pyramid and look at how to write unit tests for a service.

9.2. Writing unit tests for a service

Imagine that you want to write a test that verifies that the FTGO application’s Order Service correctly calculates the subtotal of an Order. You could write tests that run Order Service, invoke its REST API to create an Order, and check that the HTTP response contains the expected values. The drawback of this approach is that such tests are not only complex but also slow. If they were run as compile-time tests for the Order class, you’d waste a lot of time waiting for them to finish. A much more productive approach is to write unit tests for the Order class.

As figure 9.10 shows, unit tests are the lowest level of the test pyramid. They’re technology-facing tests that support development. A unit test verifies that a unit, which is a very small part of a service, works correctly. A unit is typically a class, so the goal of unit testing is to verify that it behaves as expected.

Figure 9.10. Unit tests are the base of the pyramid. They’re fast running, easy to write, and reliable. A solitary unit test tests a class in isolation, using mocks or stubs for its dependencies. A sociable unit test tests a class along with its dependencies.

There are two types of unit tests (https://martinfowler.com/bliki/UnitTest.html):

  • Solitary unit test: Tests a class in isolation using mock objects for the class’s dependencies
  • Sociable unit test: Tests a class and its dependencies

The responsibilities of the class and its role in the architecture determine which type of test to use. Figure 9.11 shows the hexagonal architecture of a typical service and the type of unit test that you’ll typically use for each kind of class. Controller and service classes are often tested using solitary unit tests. Domain objects, such as entities and value objects, are typically tested using sociable unit tests.

Figure 9.11. The responsibilities of a class determine whether to test it using solitary or sociable unit tests.

The typical testing strategy for each class is as follows:

  • Entities, such as Order, which as described in chapter 5 are objects with persistent identity, are tested using sociable unit tests.
  • Value objects, such as Money, which as described in chapter 5 are objects that are collections of values, are tested using sociable unit tests.
  • Sagas, such as CreateOrderSaga, which as described in chapter 4 maintain data consistency across services, are tested using sociable unit tests.
  • Domain services, such as OrderService, which as described in chapter 5 are classes that implement business logic that doesn’t belong in entities or value objects, are tested using solitary unit tests.
  • Controllers, such as OrderController, which handle HTTP requests, are tested using solitary unit tests.
  • Inbound and outbound messaging gateways are tested using solitary unit tests.

Let’s begin by looking at how to test entities.

9.2.1. Developing unit tests for entities

The following listing shows an excerpt of OrderTest class, which implements the unit tests for the Order entity. The class has an @Before setUp() method that creates an Order before running each test. Its @Test methods might further initialize Order, invoke one of its methods, and then make assertions about the return value and the state of Order.

Listing 9.2. A simple, fast-running unit test for the Order entity
public class OrderTest {

  private ResultWithEvents<Order> createResult;
  private Order order;

  @Before
  public void setUp() throws Exception {
    createResult = Order.createOrder(CONSUMER_ID, AJANTA_ID,
        CHICKEN_VINDALOO_LINE_ITEMS);
    order = createResult.result;
  }

  @Test
  public void shouldCalculateTotal() {
    assertEquals(CHICKEN_VINDALOO_PRICE.multiply(CHICKEN_VINDALOO_QUANTITY),
     order.getOrderTotal());
  }

  ...

}

The @Test shouldCalculateTotal() method verifies that Order.getOrderTotal() returns the expected value. Unit tests thoroughly test the business logic. They are sociable unit tests for the Order class and its dependencies. You can use them as compile-time tests because they execute extremely quickly. The Order class relies on the Money value object, so it’s important to test that class as well. Let’s see how to do that.
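For orientation, the entity under test can be pictured with a minimal sketch like this (the createOrder() factory, the ResultWithEvents shape, and the cents-based arithmetic are assumptions for illustration; the real FTGO Order class is far richer, with an order state machine, Money-based totals, and domain events):

```java
import java.util.List;

// Minimal sketch of the Order surface exercised by OrderTest above.
public class Order {

    // Simplified stand-in for the framework's ResultWithEvents: the created
    // aggregate plus the domain events to publish (events omitted here).
    public static final class ResultWithEvents<T> {
        public final T result;
        public ResultWithEvents(T result) { this.result = result; }
    }

    public static final class OrderLineItem {
        private final int quantity;
        private final long priceInCents;
        public OrderLineItem(int quantity, long priceInCents) {
            this.quantity = quantity;
            this.priceInCents = priceInCents;
        }
        long total() { return quantity * priceInCents; }
    }

    private final List<OrderLineItem> lineItems;

    private Order(List<OrderLineItem> lineItems) { this.lineItems = lineItems; }

    // Static factory invoked by setUp() in the test above.
    public static ResultWithEvents<Order> createOrder(long consumerId,
                                                      long restaurantId,
                                                      List<OrderLineItem> lineItems) {
        return new ResultWithEvents<>(new Order(lineItems));
    }

    // The behavior verified by shouldCalculateTotal(): the sum of the
    // line item totals.
    public long getOrderTotal() {
        return lineItems.stream().mapToLong(OrderLineItem::total).sum();
    }
}
```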

9.2.2. Writing unit tests for value objects

Value objects are immutable, so they tend to be easy to test. You don’t have to worry about side effects. A test for a value object typically creates a value object in a particular state, invokes one of its methods, and makes assertions about the return value. Listing 9.3 shows the tests for the Money value object, which is a simple class that represents a money value. These tests verify the behavior of the Money class’s methods, including add(), which adds two Money objects, and multiply(), which multiplies a Money object by an integer. They are solitary tests because the Money class doesn’t depend on any other application classes.

Listing 9.3. Simple, fast-running tests for the Money value object
public class MoneyTest {

  private final int M1_AMOUNT = 10;
  private final int M2_AMOUNT = 15;

  private Money m1 = new Money(M1_AMOUNT);
  private Money m2 = new Money(M2_AMOUNT);

  @Test
  public void shouldAdd() {                                       1
     assertEquals(new Money(M1_AMOUNT + M2_AMOUNT), m1.add(m2));
  }

  @Test
  public void shouldMultiply() {                                  2
    int multiplier = 12;
    assertEquals(new Money(M2_AMOUNT * multiplier), m2.multiply(multiplier));
  }

  ...
}

  • 1 Verify that two Money objects can be added together.
  • 2 Verify that a Money object can be multiplied by an integer.
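For reference, the tests above would pass against a Money class as small as this sketch (an int amount is an assumption for brevity; the real class may represent the amount differently, for example with BigDecimal):

```java
// Minimal sketch of the Money value object assumed by MoneyTest above.
public class Money {

    private final int amount;

    public Money(int amount) { this.amount = amount; }

    // Value objects are immutable: operations return new instances
    // rather than mutating state, which is why they're so easy to test.
    public Money add(Money other) { return new Money(amount + other.amount); }

    public Money multiply(int multiplier) { return new Money(amount * multiplier); }

    // Value objects are compared by value, which is what assertEquals() relies on.
    @Override
    public boolean equals(Object o) {
        return o instanceof Money && ((Money) o).amount == amount;
    }

    @Override
    public int hashCode() { return Integer.hashCode(amount); }
}
```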

Entities and value objects are the building blocks of a service’s business logic. But some business logic also resides in the service’s sagas and services. Let’s look at how to test those.

9.2.3. Developing unit tests for sagas

A saga, such as the CreateOrderSaga class, implements important business logic, so it needs to be tested. It’s a persistent object that sends command messages to saga participants and processes their replies. As described in chapter 4, CreateOrderSaga exchanges command/reply messages with several services, such as Consumer Service and Kitchen Service. A test for this class creates a saga and verifies that it sends the expected sequence of messages to the saga participants. One test you need to write is for the happy path. You must also write tests for the various scenarios where the saga rolls back because a saga participant sent back a failure message.

One approach would be to write tests that use a real database and message broker along with stubs to simulate the various saga participants. For example, a stub for Consumer Service would subscribe to the consumerService command channel and send back the desired reply message. But tests written using this approach would be quite slow. A much more effective approach is to write tests that mock those classes that interact with the database and message broker. That way, we can focus on testing the saga’s core responsibility.

Listing 9.4 shows a test for CreateOrderSaga. It’s a sociable unit test that tests the saga class and its dependencies. It’s written using the Eventuate Tram Saga testing framework (https://github.com/eventuate-tram/eventuate-tram-sagas). This framework provides an easy-to-use DSL that abstracts away the details of interacting with sagas. With this DSL, you can create a saga and verify that it sends the correct command messages. Under the covers, the Saga testing framework configures the Saga framework with mocks for the database and messaging infrastructure.

Listing 9.4. A simple, fast-running unit test for CreateOrderSaga
public class CreateOrderSagaTest {

  @Test
  public void shouldCreateOrder() {
    given()
        .saga(new CreateOrderSaga(kitchenServiceProxy),                1
                 new CreateOrderSagaState(ORDER_ID,
                            CHICKEN_VINDALOO_ORDER_DETAILS)).
    expect().                                                          2
         command(new ValidateOrderByConsumer(CONSUMER_ID, ORDER_ID,
                CHICKEN_VINDALOO_ORDER_TOTAL)).
        to(ConsumerServiceChannels.consumerServiceChannel).
    andGiven().
        successReply().                                                3
     expect().
          command(new CreateTicket(AJANTA_ID, ORDER_ID, null)).        4
           to(KitchenServiceChannels.kitchenServiceChannel);
  }

  @Test
  public void shouldRejectOrderDueToConsumerVerificationFailed() {
    given()
        .saga(new CreateOrderSaga(kitchenServiceProxy),
                new CreateOrderSagaState(ORDER_ID,
                           CHICKEN_VINDALOO_ORDER_DETAILS)).
    expect().
        command(new ValidateOrderByConsumer(CONSUMER_ID, ORDER_ID,
                CHICKEN_VINDALOO_ORDER_TOTAL)).
        to(ConsumerServiceChannels.consumerServiceChannel).
    andGiven().
        failureReply().                                                5
     expect().
        command(new RejectOrderCommand(ORDER_ID)).
        to(OrderServiceChannels.orderServiceChannel);                  6
   }

}

  • 1 Create the saga.
  • 2 Verify that it sends a ValidateOrderByConsumer message to Consumer Service.
  • 3 Send a Success reply to that message.
  • 4 Verify that it sends a CreateTicket message to Kitchen Service.
  • 5 Send a failure reply indicating that Consumer Service rejected Order.
  • 6 Verify that the saga sends a RejectOrderCommand message to Order Service.

The @Test shouldCreateOrder() method tests the happy path. The @Test shouldRejectOrderDueToConsumerVerificationFailed() method tests the scenario where Consumer Service rejects the order. It verifies that CreateOrderSaga sends a RejectOrderCommand to compensate for the consumer being rejected. The CreateOrderSagaTest class has methods that test other failure scenarios.

Let’s now look at how to test domain services.

9.2.4. Writing unit tests for domain services

The majority of a service’s business logic is implemented by the entities, value objects, and sagas. Domain service classes, such as the OrderService class, implement the remainder. This class is a typical domain service class. Its methods invoke entities and repositories and publish domain events. An effective way to test this kind of class is to use a mostly solitary unit test, which mocks dependencies such as repositories and messaging classes.


Listing 9.5 shows the OrderServiceTest class, which tests OrderService. It defines solitary unit tests, which use Mockito mocks for the service’s dependencies. Each test implements the test phases as follows:

  1. Setup: Configures the mock objects for the service’s dependencies
  2. Execute: Invokes a service method
  3. Verify: Verifies that the value returned by the service method is correct and that the dependencies have been invoked correctly
Listing 9.5. A simple, fast-running unit test for the OrderService class
public class OrderServiceTest {

  private OrderService orderService;
  private OrderRepository orderRepository;
  private DomainEventPublisher eventPublisher;
  private RestaurantRepository restaurantRepository;
  private SagaManager<CreateOrderSagaState> createOrderSagaManager;
  private SagaManager<CancelOrderSagaData> cancelOrderSagaManager;
  private SagaManager<ReviseOrderSagaData> reviseOrderSagaManager;

  @Before
  public void setup() {
    orderRepository = mock(OrderRepository.class);                         1
     eventPublisher = mock(DomainEventPublisher.class);
    restaurantRepository = mock(RestaurantRepository.class);
    createOrderSagaManager = mock(SagaManager.class);
    cancelOrderSagaManager = mock(SagaManager.class);
    reviseOrderSagaManager = mock(SagaManager.class);
    orderService = new OrderService(orderRepository, eventPublisher,       2
             restaurantRepository, createOrderSagaManager,
            cancelOrderSagaManager, reviseOrderSagaManager);
  }


  @Test
  public void shouldCreateOrder() {
    when(restaurantRepository                                              3
       .findById(AJANTA_ID)).thenReturn(Optional.of(AJANTA_RESTAURANT));
    when(orderRepository.save(any(Order.class))).then(invocation -> {      4
       Order order = (Order) invocation.getArguments()[0];
      order.setId(ORDER_ID);
      return order;
    });

    Order order = orderService.createOrder(CONSUMER_ID,                    5
                     AJANTA_ID, CHICKEN_VINDALOO_MENU_ITEMS_AND_QUANTITIES);

    verify(orderRepository).save(same(order));                             6

    verify(eventPublisher).publish(Order.class, ORDER_ID,                  7
             singletonList(
                 new OrderCreatedEvent(CHICKEN_VINDALOO_ORDER_DETAILS)));

    verify(createOrderSagaManager)                                         8
           .create(new CreateOrderSagaState(ORDER_ID,
                       CHICKEN_VINDALOO_ORDER_DETAILS),
                  Order.class, ORDER_ID);
  }

}

  • 1 Create Mockito mocks for OrderService’s dependencies.
  • 2 Create an OrderService injected with mock dependencies.
  • 3 Configure RestaurantRepository.findById() to return the Ajanta restaurant.
  • 4 Configure OrderRepository.save() to set Order’s ID.
  • 5 Invoke OrderService.createOrder().
  • 6 Verify that OrderService saved the newly created Order in the database.
  • 7 Verify that OrderService published an OrderCreatedEvent.
  • 8 Verify that OrderService created a CreateOrderSaga.


The setUp() method creates an OrderService injected with mock dependencies. The @Test shouldCreateOrder() method verifies that OrderService.createOrder() invokes OrderRepository to save the newly created Order, publishes an OrderCreatedEvent, and creates a CreateOrderSaga.


Now that we’ve seen how to unit test the domain logic classes, let’s look at how to unit test the adapters that interact with external systems.


9.2.5. Developing unit tests for controllers


Services, such as Order Service, typically have one or more controllers that handle HTTP requests from other services and the API gateway. A controller class consists of a set of request handler methods. Each method implements a REST API endpoint. A method’s parameters represent values from the HTTP request, such as path variables. It typically invokes a domain service or a repository and returns a response object. OrderController, for instance, invokes OrderService and OrderRepository. An effective testing strategy for controllers is solitary unit tests that mock the services and repositories.


You could write a test class similar to the OrderServiceTest class to instantiate a controller class and invoke its methods. But this approach doesn’t test some important functionality, such as request routing. It’s much more effective to use a mock MVC testing framework, such as Spring Mock Mvc, which is part of the Spring Framework, or Rest Assured Mock MVC, which builds on Spring Mock Mvc. Tests written using one of these frameworks make what appear to be HTTP requests and make assertions about HTTP responses. These frameworks enable you to test HTTP request routing and conversion of Java objects to and from JSON without having to make real network calls. Under the covers, Spring Mock Mvc instantiates just enough of the Spring MVC classes to make this possible.

Are these really unit tests?

Because these tests use the Spring Framework, you might argue that they’re not unit tests. They’re certainly more heavyweight than the unit tests I’ve described so far. The Spring Mock Mvc documentation refers to these as out-of-servlet-container integration tests (https://docs.spring.io/spring/docs/current/spring-framework-reference/testing.html#spring-mvc-test-vs-end-to-end-integration-tests). Yet Rest Assured Mock MVC describes these tests as unit tests (https://github.com/rest-assured/rest-assured/wiki/Usage#spring-mock-mvc-module). Regardless of the debate over terminology, these are important tests to write.


Listing 9.6 shows the OrderControllerTest class, which tests Order Service’s OrderController. It defines solitary unit tests that use mocks for OrderController’s dependencies. It’s written using Rest Assured Mock MVC, which provides a simple DSL that abstracts away the details of interacting with controllers. Rest Assured makes it easy to send a mock HTTP request to a controller and verify the response. OrderControllerTest creates a controller that’s injected with Mockito mocks for OrderService and OrderRepository. Each test configures the mocks, makes an HTTP request, verifies that the response is correct, and possibly verifies that the controller invoked the mocks.

Listing 9.6. A simple, fast-running unit test for the OrderController class
public class OrderControllerTest {

  private OrderService orderService;
  private OrderRepository orderRepository;
  private OrderController orderController;

  @Before
  public void setUp() throws Exception {
    orderService = mock(OrderService.class);                            1
    orderRepository = mock(OrderRepository.class);
    orderController = new OrderController(orderService, orderRepository);
  }


  @Test
  public void shouldFindOrder() {

    when(orderRepository.findById(1L))
          .thenReturn(Optional.of(CHICKEN_VINDALOO_ORDER));               2

    given().
      standaloneSetup(controllers(                                        3
               new OrderController(orderService, orderRepository))).
    when().
            get("/orders/1").                                           4
    then().
      statusCode(200).                                                  5
       body("orderId",                                                  6
            equalTo(new Long(OrderDetailsMother.ORDER_ID).intValue())).
      body("state",
           equalTo(OrderDetailsMother.CHICKEN_VINDALOO_ORDER_STATE.name())).
      body("orderTotal",
          equalTo(CHICKEN_VINDALOO_ORDER_TOTAL.asString()))
    ;
  }

  @Test
  public void shouldNotFindOrder() { ... }

  private StandaloneMockMvcBuilder controllers(Object... controllers) { ... }

}

  • 1 Create mocks for OrderController’s dependencies.
  • 2 Configure the mock OrderRepository to return an Order.
  • 3 Configure OrderController.
  • 4 Make an HTTP request.
  • 5 Verify the response status code.
  • 6 Verify elements of the JSON response body.


The shouldFindOrder() test method first configures the OrderRepository mock to return an Order. It then makes an HTTP request to retrieve the order. Finally, it checks that the request was successful and that the response body contains the expected data.


Controllers aren’t the only adapters that handle requests from external systems. There are also event/message handlers, so let’s talk about how to unit test those.


9.2.6. Writing unit tests for event and message handlers


Services often process messages sent by external systems. Order Service, for example, has OrderEventConsumer, which is a message adapter that handles domain events published by other services. Like controllers, message adapters tend to be simple classes that invoke domain services. Each of a message adapter’s methods typically invokes a service method with data from the message or event.


We can unit test message adapters using an approach similar to the one we used for unit testing controllers. Each test instantiates the message adapter, sends a message to a channel, and verifies that the service mock was invoked correctly. Behind the scenes, though, the messaging infrastructure is stubbed, so no message broker is involved. Let’s look at how to test the OrderEventConsumer class.


Listing 9.7 shows part of the OrderEventConsumerTest class, which tests OrderEventConsumer. It verifies that OrderEventConsumer routes each event to the appropriate handler method and correctly invokes OrderService. The test uses the Eventuate Tram Mock Messaging framework, which provides an easy-to-use DSL for writing mock messaging tests that uses the same given-when-then format as Rest Assured. Each test instantiates OrderEventConsumer injected with a mock OrderService, publishes a domain event, and verifies that OrderEventConsumer correctly invokes the service mock.

Listing 9.7. A fast-running unit test for the OrderEventConsumer class
public class OrderEventConsumerTest {

  private OrderService orderService;
  private OrderEventConsumer orderEventConsumer;

  @Before
  public void setUp() throws Exception {
    orderService = mock(OrderService.class);
    orderEventConsumer = new OrderEventConsumer(orderService);            1
  }

  @Test
  public void shouldCreateMenu() {

    given().
            eventHandlers(orderEventConsumer.domainEventHandlers()).      2
    when().
      aggregate("net.chrisrichardson.ftgo.restaurantservice.domain.Restaurant",
                AJANTA_ID).
      publishes(new RestaurantCreated(AJANTA_RESTAURANT_NAME,             3
                          RestaurantMother.AJANTA_RESTAURANT_MENU)).
    then().
       verify(() -> {                                                     4
          verify(orderService)
                .createMenu(AJANTA_ID,
            new RestaurantMenu(RestaurantMother.AJANTA_RESTAURANT_MENU_ITEMS));
       })
    ;
  }

}

  • 1 Instantiate OrderEventConsumer with mocked dependencies.
  • 2 Configure OrderEventConsumer domain handlers.
  • 3 Publish a RestaurantCreated event.
  • 4 Verify that OrderEventConsumer invoked OrderService.createMenu().


The setUp() method creates an OrderEventConsumer injected with a mock OrderService. The shouldCreateMenu() method publishes a RestaurantCreated event and verifies that OrderEventConsumer invoked OrderService.createMenu(). The OrderEventConsumerTest class and the other unit test classes execute extremely quickly. The unit tests run in just a few seconds.


But the unit tests don’t verify that a service, such as Order Service, properly interacts with other services. For example, the unit tests don’t verify that an Order can be persisted in MySQL. Nor do they verify that CreateOrderSaga sends command messages in the right format to the right message channel. And they don’t verify that the RestaurantCreated event processed by OrderEventConsumer has the same structure as the event published by Restaurant Service. In order to verify that a service properly interacts with other services, we must write integration tests. We also need to write component tests that test an entire service in isolation. The next chapter discusses how to conduct those types of tests, as well as end-to-end tests.


Summary

  • Automated testing is the key foundation of rapid, safe delivery of software. What’s more, because of its inherent complexity, to fully benefit from the microservice architecture you must automate your tests.
  • The purpose of a test is to verify the behavior of the system under test (SUT). In this definition, system is a fancy term that means the software element being tested. It might be something as small as a class, as large as the entire application, or something in between, such as a cluster of classes or an individual service. A collection of related tests forms a test suite.
  • A good way to simplify and speed up a test is to use test doubles. A test double is an object that simulates the behavior of a SUT’s dependency. There are two types of test doubles: stubs and mocks. A stub is a test double that returns values to the SUT. A mock is a test double that a test uses to verify that the SUT correctly invokes a dependency.
  • Use the test pyramid to determine where to focus your testing efforts for your services. The majority of your tests should be fast, reliable, and easy-to-write unit tests. You must minimize the number of end-to-end tests, because they’re slow, brittle, and time consuming to write.
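
The stub/mock distinction is easy to see in plain Java, without a mocking framework. In the following sketch, PriceCatalog, StubPriceCatalog, MockPriceCatalog, and orderTotal() are hypothetical names invented purely for illustration; a library such as Mockito generates this kind of test double for you:

```java
import java.util.ArrayList;
import java.util.List;

public class TestDoubleExample {

  // The SUT's dependency.
  interface PriceCatalog {
    int priceOf(String menuItem); // price in cents
  }

  // The system under test: computes an order total using the catalog.
  static int orderTotal(PriceCatalog catalog, String menuItem, int quantity) {
    return catalog.priceOf(menuItem) * quantity;
  }

  // A stub simply returns a canned value to the SUT.
  static class StubPriceCatalog implements PriceCatalog {
    public int priceOf(String menuItem) {
      return 1295;
    }
  }

  // A mock also records invocations, so the test can verify that the
  // SUT called the dependency correctly.
  static class MockPriceCatalog implements PriceCatalog {
    final List<String> requestedItems = new ArrayList<>();
    public int priceOf(String menuItem) {
      requestedItems.add(menuItem);
      return 1295;
    }
  }

  public static void main(String[] args) {
    // Stub-style test: assert only on the SUT's return value.
    if (orderTotal(new StubPriceCatalog(), "Chicken Vindaloo", 2) != 2590) {
      throw new AssertionError("unexpected total");
    }

    // Mock-style test: additionally verify the interaction.
    MockPriceCatalog mock = new MockPriceCatalog();
    orderTotal(mock, "Chicken Vindaloo", 2);
    if (!mock.requestedItems.equals(List.of("Chicken Vindaloo"))) {
      throw new AssertionError("dependency invoked incorrectly");
    }
    System.out.println("ok");
  }
}
```

The stub test only checks the value returned to the caller, whereas the mock test also asserts on how the SUT used its dependency, which is exactly the split the summary bullet describes.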


Chapter 10. Testing microservices: Part 2


This chapter covers

  • Techniques for testing services in isolation
  • Using consumer-driven contract testing to write tests that quickly yet reliably verify interservice communication
  • When and how to do end-to-end testing of applications


This chapter builds on the previous chapter, which introduced testing concepts, including the test pyramid. The test pyramid describes the relative proportions of the different types of tests that you should write. The previous chapter described how to write unit tests, which are at the base of the testing pyramid. In this chapter, we continue our ascent of the testing pyramid.


This chapter begins with how to write integration tests, which are the level above unit tests in the testing pyramid. Integration tests verify that a service can properly interact with infrastructure services, such as databases, and other application services. Next, I cover component tests, which are acceptance tests for services. A component test tests a service in isolation by using stubs for its dependencies. After that, I describe how to write end-to-end tests, which test a group of services or the entire application. End-to-end tests are at the top of the test pyramid and should, therefore, be used sparingly.


Let’s start by taking a look at how to write integration tests.


10.1. Writing integration tests


Services typically interact with other services. For example, Order Service, as figure 10.1 shows, interacts with several services. Its REST API is consumed by API Gateway, and its domain events are consumed by services, including Order History Service. Order Service uses several other services. It persists Orders in MySQL. It also sends commands to and consumes replies from several other services, such as Kitchen Service.

Figure 10.1. Integration tests must verify that a service can communicate with its clients and dependencies. But rather than testing the whole service, the strategy is to test the individual adapter classes that implement the communication.


In order to be confident that a service such as Order Service works as expected, we must write tests that verify that the service can properly interact with infrastructure services and other application services. One approach is to launch all the services and test them through their APIs. This, however, is what’s known as end-to-end testing, which is slow, brittle, and costly. As explained in section 10.3, there’s a role for end-to-end testing sometimes, but it’s at the top of the test pyramid, so you want to minimize the number of end-to-end tests.


A much more effective strategy is to write what are known as integration tests. As figure 10.2 shows, integration tests are the layer above unit tests in the testing pyramid. They verify that a service can properly interact with infrastructure services and other services. But unlike end-to-end tests, they don’t launch services. Instead, we use a couple of strategies that significantly simplify the tests without impacting their effectiveness.

Figure 10.2. Integration tests are the layer above unit tests. They verify that a service can communicate with its dependencies, which include infrastructure services, such as databases, and application services.


The first strategy is to test each of the service’s adapters, along with, perhaps, the adapter’s supporting classes. For example, in section 10.1.1 you’ll see a JPA persistence test that verifies that Orders are persisted correctly. Rather than test persistence through Order Service’s API, it directly tests the OrderRepository class. Similarly, in section 10.1.3 you’ll see a test that verifies that Order Service publishes correctly structured domain events by testing the OrderDomainEventPublisher class. The benefit of testing only a small number of classes rather than the entire service is that the tests are significantly simpler and faster.


The second strategy for simplifying integration tests that verify interactions between application services is to use contracts, discussed in chapter 9. A contract is a concrete example of an interaction between a pair of services. As table 10.1 shows, the structure of a contract depends on the type of interaction between the services.

Table 10.1. The structure of a contract depends on the type of interaction between the services.

Interaction style               Consumer                Provider          Contract
REST-based request/response     API Gateway             Order Service     HTTP request and response
Publish/subscribe               Order History Service   Order Service     Domain event
Asynchronous request/response   Order Service           Kitchen Service   Command message and reply message


A contract consists of either one message, in the case of publish/subscribe style interactions, or two messages, in the case of request/response and asynchronous request/response style interactions.
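
As a concrete illustration, the contract for the REST-based interaction in the first row of table 10.1 is just such a request/response pair. Informally, it might look like the following (the order ID and JSON field values are illustrative, not taken from the book’s actual contract files):

```
Request (from the consumer, API Gateway):
  GET /orders/1223232
  Accept: application/json

Response (from the provider, Order Service):
  HTTP/1.1 200 OK
  Content-Type: application/json

  {
    "orderId": 1223232,
    "state": "APPROVAL_PENDING",
    "orderTotal": "61.70"
  }
```

A publish/subscribe contract, by contrast, would contain just the one domain event message.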


The contracts are used to test both the consumer and the provider, which ensures that they agree on the API. They’re used in slightly different ways depending on whether you’re testing the consumer or the provider:

  • Consumer-side tests: These are tests for the consumer’s adapter. They use the contracts to configure stubs that simulate the provider, enabling you to write integration tests for a consumer that don’t require a running provider.
  • Provider-side tests: These are tests for the provider’s adapter. They use the contracts to test the adapters using mocks for the adapter’s dependencies.


Later in this section, I describe examples of these types of tests—but first let’s look at how to write persistence tests.


10.1.1. Persistence integration tests


Services typically store data in a database. For instance, Order Service persists aggregates, such as Order, in MySQL using JPA. Similarly, Order History Service maintains a CQRS view in AWS DynamoDB. The unit tests we wrote earlier only test in-memory objects. In order to be confident that a service works correctly, we must write persistence integration tests, which verify that a service’s database access logic works as expected. In the case of Order Service, this means testing the JPA repositories, such as OrderRepository.


Each phase of a persistence integration test behaves as follows:

  • Setup: Set up the database by creating the database schema and initializing it to a known state. It might also begin a database transaction.
  • Execute: Perform a database operation.
  • Verify: Make assertions about the state of the database and objects retrieved from the database.
  • Teardown: An optional phase that might undo the changes made to the database by, for example, rolling back the transaction that was started by the setup phase.


Listing 10.1 shows a persistent integration test for the Order aggregate and OrderRepository. Apart from relying on JPA to create the database schema, the persistence integration tests don’t make any assumption about the state of the database. Consequently, tests don’t need to roll back the changes they make to the database, which avoids problems with the ORM caching data changes in memory.

Listing 10.1. An integration test that verifies that an Order can be persisted
@RunWith(SpringRunner.class)
@SpringBootTest(classes = OrderJpaTestConfiguration.class)
public class OrderJpaTest {

  @Autowired
  private OrderRepository orderRepository;

  @Autowired
  private TransactionTemplate transactionTemplate;

  @Test
  public void shouldSaveAndLoadOrder() {

    Long orderId = transactionTemplate.execute((ts) -> {
      Order order =
              new Order(CONSUMER_ID, AJANTA_ID, CHICKEN_VINDALOO_LINE_ITEMS);
      orderRepository.save(order);
      return order.getId();
    });

    transactionTemplate.execute((ts) -> {
      Order order = orderRepository.findById(orderId).get();

      assertEquals(OrderState.APPROVAL_PENDING, order.getState());
      assertEquals(AJANTA_ID, order.getRestaurantId());
      assertEquals(CONSUMER_ID, order.getConsumerId().longValue());
      assertEquals(CHICKEN_VINDALOO_LINE_ITEMS, order.getLineItems());
      return null;
    });

  }

}

The shouldSaveAndLoadOrder() test method executes two transactions. The first saves a newly created Order in the database. The second transaction loads the Order and verifies that its fields are properly initialized.

One problem you need to solve is how to provision the database that’s used in persistence integration tests. An effective solution is to use Docker to run an instance of the database during testing. Section 10.2 describes how to use the Docker Compose Gradle plugin to automatically run services during component testing. You can use a similar approach to run MySQL, for example, during persistence integration testing.
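As a sketch of that approach, a minimal docker-compose file for the test database might look like the following. This is illustrative only — the image tag, credentials, and database name are assumptions, not the FTGO project’s actual configuration:

```yaml
# Hypothetical Docker Compose config for running MySQL during persistence tests
version: "3"
services:
  mysql:
    image: "mysql:5.7"
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_DATABASE: ftgo_order_service
```

The Docker Compose Gradle plugin can then start this container before the persistence integration tests run and stop it afterward.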

The database is only one of the external services a service interacts with. Let’s now look at how to write integration tests for interservice communication between application services, starting with REST.

10.1.2. Integration testing REST-based request/response style interactions

REST is a widely used interservice communication mechanism. The REST client and REST service must agree on the REST API, which includes the REST endpoints and the structure of the request and response bodies. The client must send an HTTP request to the correct endpoint, and the service must send back the response that the client expects.

For example, chapter 8 describes how the FTGO application’s API Gateway makes REST API calls to numerous services, including ConsumerService, Order Service, and Delivery Service. The OrderService’s GET /orders/{orderId} endpoint is one of the endpoints invoked by the API Gateway. In order to be confident that API Gateway and Order Service can communicate without using an end-to-end test, we need to write integration tests.

As stated in the preceding chapter, a good integration testing strategy is to use consumer-driven contract tests. The interaction between API Gateway and Order Service’s GET /orders/{orderId} endpoint can be described using a set of HTTP-based contracts. Each contract consists of an HTTP request and an HTTP reply. The contracts are used to test API Gateway and Order Service.

Figure 10.3 shows how to use Spring Cloud Contract to test REST-based interactions. The consumer-side API Gateway integration tests use the contracts to configure an HTTP stub server that simulates the behavior of Order Service. A contract’s request specifies an HTTP request from the API gateway, and the contract’s response specifies the response that the stub sends back to the API gateway. Spring Cloud Contract uses the contracts to code-generate the provider-side Order Service integration tests, which test the controllers using Spring Mock MVC or Rest Assured Mock MVC. The contract’s request specifies the HTTP request to make to the controller, and the contract’s response specifies the controller’s expected response.

Figure 10.3. The contracts are used to verify that the adapter classes on both ends of the REST-based communication between API Gateway and Order Service conform to the contracts. The consumer-side tests verify that OrderServiceProxy invokes Order Service correctly. The provider-side tests verify that OrderController correctly implements the REST API endpoints.

The consumer-side OrderServiceProxyTest invokes OrderServiceProxy, which has been configured to make HTTP requests to WireMock. WireMock is a tool for efficiently mocking HTTP servers—in this test it simulates Order Service. Spring Cloud Contract manages WireMock and configures it to respond to the HTTP requests defined by the contracts.

On the provider side, Spring Cloud Contract generates a test class called HttpTest, which uses Rest Assured Mock MVC to test Order Service’s controllers. Test classes such as HttpTest must extend a handwritten base class. In this example, the base class BaseHttp instantiates OrderController injected with mock dependencies and calls RestAssuredMockMvc.standaloneSetup() to configure Spring MVC.

Let’s take a closer look at how this works, starting with an example contract.

An example contract for a REST API

A REST contract, such as the one shown in listing 10.2, specifies an HTTP request, which is sent by the REST client, and the HTTP response, which the client expects to get back from the REST server. A contract’s request specifies the HTTP method, the path, and optional headers. A contract’s response specifies the HTTP status code, optional headers, and, when appropriate, the expected body.

Listing 10.2. A contract that describes an HTTP-based request/response style interaction
org.springframework.cloud.contract.spec.Contract.make {
    request {
        method 'GET'
        url '/orders/1223232'
    }
    response {
        status 200
        headers {
            header('Content-Type': 'application/json;charset=UTF-8')
        }
        body('''{"orderId" : "1223232", "state" : "APPROVAL_PENDING"}''')
    }
}

This particular contract describes a successful attempt by API Gateway to retrieve an Order from Order Service. Let’s now look at how to use this contract to write integration tests, starting with the tests for Order Service.

Consumer-driven contract integration tests for Order Service

The consumer-driven contract integration tests for Order Service verify that its API meets its clients’ expectations. Listing 10.3 shows HttpBase, which is the base class for the test class code-generated by Spring Cloud Contract. It’s responsible for the setup phase of the test. It creates the controllers injected with mock dependencies and configures those mocks to return values that cause the controller to generate the expected response.

Listing 10.3. The abstract base class for the test code generated by Spring Cloud Contract
public abstract class HttpBase {

  private StandaloneMockMvcBuilder controllers(Object... controllers) {
    ...
    return MockMvcBuilders.standaloneSetup(controllers)
                     .setMessageConverters(...);
  }

  @Before
  public void setup() {
    OrderService orderService = mock(OrderService.class);                    1
     OrderRepository orderRepository = mock(OrderRepository.class);
    OrderController orderController =
              new OrderController(orderService, orderRepository);

    when(orderRepository.findById(1223232L))                                 2
            .thenReturn(Optional.of(OrderDetailsMother.CHICKEN_VINDALOO_ORDER));
    ...
    RestAssuredMockMvc.standaloneSetup(controllers(orderController));        3

  }
}

  • 1 Create an OrderController injected with mock dependencies.
  • 2 Configure OrderRepository to return an Order when findById() is invoked with the orderId specified in the contract.
  • 3 Configure Spring MVC with OrderController.

The argument 1223232L that’s passed to the mock OrderRepository’s findById() method matches the orderId specified in the contract shown in listing 10.2. This test verifies that Order Service has a GET /orders/{orderId} endpoint that matches its client’s expectations.

Let’s take a look at the corresponding client test.

Consumer-side integration tests for API Gateway’s OrderServiceProxy

API Gateway’s OrderServiceProxy invokes the GET /orders/{orderId} endpoint. Listing 10.4 shows the OrderServiceProxyIntegrationTest test class, which verifies that it conforms to the contracts. This class is annotated with @AutoConfigureStubRunner, provided by Spring Cloud Contract. It tells Spring Cloud Contract to run the WireMock server on a random port and configure it using the specified contracts. OrderServiceProxyIntegrationTest configures OrderServiceProxy to make requests to the WireMock port.

Listing 10.4. The consumer-side integration test for API Gateway’s OrderServiceProxy
@RunWith(SpringRunner.class)
@SpringBootTest(classes=TestConfiguration.class,
        webEnvironment= SpringBootTest.WebEnvironment.NONE)
@AutoConfigureStubRunner(ids =                                            1
         {"net.chrisrichardson.ftgo.contracts:ftgo-order-service-contracts"},
        workOffline = false)
@DirtiesContext
public class OrderServiceProxyIntegrationTest {

  @Value("${stubrunner.runningstubs.ftgo-order-service-contracts.port}")  2
  private int port;
  private OrderDestinations orderDestinations;
  private OrderServiceProxy orderService;

  @Before
  public void setUp() throws Exception {
    orderDestinations = new OrderDestinations();
    String orderServiceUrl = "http://localhost:" + port;
    orderDestinations.setOrderServiceUrl(orderServiceUrl);
    orderService = new OrderServiceProxy(orderDestinations,               3
                                          WebClient.create());
  }

  @Test
  public void shouldVerifyExistingCustomer() {
    OrderInfo result = orderService.findOrderById("1223232").block();
    assertEquals("1223232", result.getOrderId());
    assertEquals("APPROVAL_PENDING", result.getState());
  }

  @Test(expected = OrderNotFoundException.class)
  public void shouldFailToFindMissingOrder() {
    orderService.findOrderById("555").block();
  }

}

  • 1 Tell Spring Cloud Contract to configure WireMock with Order Service’s contracts.
  • 2 Obtain the randomly assigned port that WireMock is running on.
  • 3 Create an OrderServiceProxy configured to make requests to WireMock.

Each test method invokes OrderServiceProxy and verifies that either it returns the correct values or throws the expected exception. The shouldVerifyExistingCustomer() test method verifies that findOrderById() returns values equal to those specified in the contract’s response. The shouldFailToFindMissingOrder() test method attempts to retrieve a nonexistent Order and verifies that OrderServiceProxy throws an OrderNotFoundException. Testing both the REST client and the REST service using the same contracts ensures that they agree on the API.
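The underlying idea — a single contract definition exercised against both sides — can be sketched in plain Java. The following is an illustrative model only; the HttpContract record and its methods are invented here and are not part of the Spring Cloud Contract API:

```java
import java.util.Objects;

public class HttpContractSketch {

    // A hypothetical model of a REST contract: the request the client sends
    // and the response it expects back.
    record HttpContract(String method, String path,
                        int expectedStatus, String expectedBody) {

        // Provider-side check: does the service's actual response conform?
        boolean providerConforms(int actualStatus, String actualBody) {
            return actualStatus == expectedStatus
                    && Objects.equals(actualBody, expectedBody);
        }

        // Consumer-side stub: replay the canned response for a matching request.
        String stubResponse(String requestMethod, String requestPath) {
            if (method.equals(requestMethod) && path.equals(requestPath)) {
                return expectedBody;
            }
            throw new IllegalArgumentException("request not covered by contract");
        }
    }

    // The contract from listing 10.2, restated as data.
    static final HttpContract GET_ORDER = new HttpContract(
            "GET", "/orders/1223232", 200,
            "{\"orderId\" : \"1223232\", \"state\" : \"APPROVAL_PENDING\"}");

    public static void main(String[] args) {
        // Provider side: validate the controller's actual response.
        System.out.println(GET_ORDER.providerConforms(200,
                "{\"orderId\" : \"1223232\", \"state\" : \"APPROVAL_PENDING\"}"));

        // Consumer side: the client under test receives the canned reply.
        System.out.println(GET_ORDER.stubResponse("GET", "/orders/1223232"));
    }
}
```

Because both checks read the same HttpContract instance, a change to the API that breaks one side necessarily shows up on the other — which is exactly the guarantee the contract tests provide.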

Let’s now look at how to do the same kind of testing for services that interact using messaging.

10.1.3. Integration testing publish/subscribe-style interactions

Services often publish domain events that are consumed by one or more other services. Integration testing must verify that the publisher and its consumers agree on the message channel and the structure of the domain events. Order Service, for example, publishes Order* events whenever it creates or updates an Order aggregate. Order History Service is one of the consumers of those events. We must, therefore, write tests that verify that these services can interact.

Figure 10.4 shows the approach to integration testing publish/subscribe interactions. It’s quite similar to the approach used for testing REST interactions. As before, the interactions are defined by a set of contracts. What’s different is that each contract specifies a domain event.

Figure 10.4. The contracts are used to test both ends of the publish/subscribe interaction. The provider-side tests verify that OrderDomainEventPublisher publishes events that conform to the contracts. The consumer-side tests verify that OrderHistoryEventHandlers consumes the contracts’ example events.

Each consumer-side test publishes the event specified by the contract and verifies that OrderHistoryEventHandlers invokes its mocked dependencies correctly.

On the provider side, Spring Cloud Contract code-generates test classes that extend MessagingBase, which is a hand-written abstract superclass. Each test method invokes a hook method defined by MessagingBase, which is expected to trigger the publication of an event by the service. In this example, each hook method invokes OrderDomainEventPublisher, which is responsible for publishing Order aggregate events. The test method then verifies that OrderDomainEventPublisher published the expected event. Let’s look at the details of how these tests work, starting with the contract.
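Stripped of the frameworks, the provider-side flow — a hook method publishes through an in-memory channel stub, and the generated test compares the captured message to the contract — can be sketched as follows. This is a hypothetical illustration: InMemoryChannel and the method names are invented here (only the header keys mirror listing 10.5); Spring Cloud Contract and Eventuate Tram supply the real equivalents.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;

public class MessagingContractSketch {

    // Captures published messages in memory instead of sending them to a broker.
    static class InMemoryChannel {
        final Deque<Map<String, String>> messages = new ArrayDeque<>();

        void publish(String aggregateType, String aggregateId, String payload) {
            messages.add(Map.of(
                    "event-aggregate-type", aggregateType,
                    "event-aggregate-id", aggregateId,
                    "payload", payload));
        }
    }

    static final InMemoryChannel CHANNEL = new InMemoryChannel();

    // Plays the role of the orderCreated() hook method in MessagingBase.
    static void orderCreated() {
        CHANNEL.publish("net.chrisrichardson.ftgo.orderservice.domain.Order", "1",
                "{\"orderState\":\"APPROVAL_PENDING\"}");
    }

    public static void main(String[] args) {
        // The generated test invokes the hook ...
        orderCreated();
        // ... then receives the message and compares its headers and body
        // to the contract's outputMessage.
        Map<String, String> captured = CHANNEL.messages.poll();
        System.out.println(captured.get("event-aggregate-type"));
        System.out.println(captured.get("event-aggregate-id"));
    }
}
```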

The contract for publishing an OrderCreated event

Listing 10.5 shows the contract for an OrderCreated event. It specifies the event’s channel, along with the expected body and message headers.

Listing 10.5. A contract for a publish/subscribe-style interaction
package contracts;

org.springframework.cloud.contract.spec.Contract.make {
    label 'orderCreatedEvent'                                         1
    input {
        triggeredBy('orderCreated()')                                 2
    }

    outputMessage {                                                   3
        sentTo('net.chrisrichardson.ftgo.orderservice.domain.Order')
        body('''{"orderDetails":{"lineItems":[{"quantity":5,"menuItemId":"1",
                 "name":"Chicken Vindaloo","price":"12.34","total":"61.70"}],
                 "orderTotal":"61.70","restaurantId":1,
        "consumerId":1511300065921},"orderState":"APPROVAL_PENDING"}''')
        headers {
            header('event-aggregate-type',
                        'net.chrisrichardson.ftgo.orderservice.domain.Order')
            header('event-aggregate-id', '1')
        }
    }
}

  • 1 Used by the consumer test to trigger the event to be published
  • 2 Invoked by the code-generated provider test
  • 3 An OrderCreated domain event

The contract also has two other important elements:

  • label—is used by a consumer test to trigger publication of the event by Spring Cloud Contract
  • triggeredBy—the name of the superclass method invoked by the generated test method to trigger the publishing of the event

Let’s look at how the contract is used, starting with the provider-side test for OrderService.

Consumer-driven contract tests for Order Service

The provider-side test for Order Service is another consumer-driven contract integration test. It verifies that OrderDomainEventPublisher, which is responsible for publishing Order aggregate domain events, publishes events that match its clients’ expectations. Listing 10.6 shows MessagingBase, which is the base class for the test classes code-generated by Spring Cloud Contract. It’s responsible for configuring the OrderDomainEventPublisher class to use in-memory messaging stubs. It also defines the methods, such as orderCreated(), which are invoked by the generated tests to trigger the publishing of the event.

Listing 10.6. The abstract base class for Spring Cloud Contract provider-side tests
@RunWith(SpringRunner.class)
@SpringBootTest(classes = MessagingBase.TestConfiguration.class,
                webEnvironment = SpringBootTest.WebEnvironment.NONE)
@AutoConfigureMessageVerifier
public abstract class MessagingBase {

  @Configuration
  @EnableAutoConfiguration
  @Import({EventuateContractVerifierConfiguration.class,
           TramEventsPublisherConfiguration.class,
           TramInMemoryConfiguration.class})
  public static class TestConfiguration {

    @Bean
    public OrderDomainEventPublisher
            OrderDomainEventPublisher(DomainEventPublisher eventPublisher) {
      return new OrderDomainEventPublisher(eventPublisher);
    }
  }


  @Autowired
  private OrderDomainEventPublisher OrderDomainEventPublisher;

  protected void orderCreated() {                                   1
     OrderDomainEventPublisher.publish(CHICKEN_VINDALOO_ORDER,
          singletonList(new OrderCreatedEvent(CHICKEN_VINDALOO_ORDER_DETAILS)));
  }

}

  • 1 orderCreated() is invoked by a code-generated test subclass to publish the event.

This test class configures OrderDomainEventPublisher with in-memory messaging stubs. orderCreated() is invoked by the test method generated from the contract shown earlier in listing 10.5. It invokes OrderDomainEventPublisher to publish an OrderCreated event. The test method attempts to receive this event and then verifies that it matches the event specified in the contract. Let’s now look at the corresponding consumer-side tests.

Consumer-side contract tests for Order History Service

Order History Service consumes events published by Order Service. As I described in chapter 7, the adapter class that handles these events is the OrderHistoryEventHandlers class. Its event handlers invoke OrderHistoryDao to update the CQRS view. Listing 10.7 shows the consumer-side integration test. It creates an OrderHistoryEventHandlers injected with a mock OrderHistoryDao. Each test method first invokes Spring Cloud to publish the event defined in the contract and then verifies that OrderHistoryEventHandlers invokes OrderHistoryDao correctly.

Listing 10.7. The consumer-side integration test for the OrderHistoryEventHandlers class
@RunWith(SpringRunner.class)
@SpringBootTest(classes= OrderHistoryEventHandlersTest.TestConfiguration.class,
        webEnvironment= SpringBootTest.WebEnvironment.NONE)
@AutoConfigureStubRunner(ids =
        {"net.chrisrichardson.ftgo.contracts:ftgo-order-service-contracts"},
        workOffline = false)
@DirtiesContext
public class OrderHistoryEventHandlersTest {

  @Configuration
  @EnableAutoConfiguration
  @Import({OrderHistoryServiceMessagingConfiguration.class,
          TramCommandProducerConfiguration.class,
          TramInMemoryConfiguration.class,
          EventuateContractVerifierConfiguration.class})
  public static class TestConfiguration {

    @Bean
    public OrderHistoryDao orderHistoryDao() {
      return mock(OrderHistoryDao.class);                                    1
     }
  }

  @Test
  public void shouldHandleOrderCreatedEvent() throws ... {
    stubFinder.trigger("orderCreatedEvent");                                 2
     eventually(() -> {                                                      3
       verify(orderHistoryDao).addOrder(any(Order.class), any(Optional.class));
    });
  }

  • 1 Create a mock OrderHistoryDao to inject into OrderHistoryEventHandlers.
  • 2 Trigger the orderCreatedEvent stub, which emits an OrderCreated event.
  • 3 Verify that OrderHistoryEventHandlers invoked orderHistoryDao.addOrder().

The shouldHandleOrderCreatedEvent() test method tells Spring Cloud Contract to publish the OrderCreated event. It then verifies that OrderHistoryEventHandlers invoked orderHistoryDao.addOrder(). Testing both the domain event’s publisher and consumer using the same contracts ensures that they agree on the API. Let’s now look at how to write integration tests for services that interact using asynchronous request/response.

10.1.4. Integration contract tests for asynchronous request/response interactions

Publish/subscribe isn’t the only kind of messaging-based interaction style. Services also interact using asynchronous request/response. For example, in chapter 4 we saw that Order Service implements sagas that send command messages to various services, such as Kitchen Service, and processes the reply messages.

The two parties in an asynchronous request/response interaction are the requestor, which is the service that sends the command, and the replier, which is the service that processes the command and sends back a reply. They must agree on the name of the command message channel and the structure of the command and reply messages. Let’s look at how to write integration tests for asynchronous request/response interactions.
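What the two parties must agree on — the channel name plus the command message’s structure — can be expressed as a simple predicate. This plain-Java sketch is hypothetical: the channel name, header keys, and saga-id regex mirror listing 10.8 (including its elided `net.chrisrichardson...` package names), but matchesInput and the rest are invented for illustration.

```java
import java.util.Map;
import java.util.regex.Pattern;

public class AsyncContractSketch {

    // The saga id format from the contract's input message headers.
    static final Pattern SAGA_ID =
            Pattern.compile("[0-9a-f]{16}-[0-9a-f]{16}");

    // Does a command message satisfy the contract's input section?
    static boolean matchesInput(String channel, Map<String, String> headers) {
        return channel.equals("kitchenService")
                && "net.chrisrichardson...CreateTicket"
                        .equals(headers.get("command_type"))
                && headers.containsKey("command_reply_to")
                && SAGA_ID.matcher(
                        headers.getOrDefault("command_saga_id", "")).matches();
    }

    public static void main(String[] args) {
        Map<String, String> headers = Map.of(
                "command_type", "net.chrisrichardson...CreateTicket",
                "command_saga_id", "0123456789abcdef-fedcba9876543210",
                "command_reply_to", "net.chrisrichardson...CreateOrderSaga-Reply");
        // The consumer-side stub applies this check to the requestor's message;
        // the provider-side test feeds a message satisfying it to the replier.
        System.out.println(matchesInput("kitchenService", headers));
    }
}
```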

Figure 10.5 shows how to test the interaction between Order Service and Kitchen Service. The approach to integration testing asynchronous request/response interactions is quite similar to the approach used for testing REST interactions. The interactions between the services are defined by a set of contracts. What’s different is that a contract specifies an input message and an output message instead of an HTTP request and reply.

Figure 10.5. The contracts are used to test the adapter classes that implement each end of the asynchronous request/response interaction. The provider-side tests verify that KitchenServiceCommandHandler handles commands and sends back replies. The consumer-side tests verify that KitchenServiceProxy sends commands that conform to the contracts and that it handles the contracts’ example replies.

The consumer-side test verifies that the command message proxy class sends correctly structured command messages and correctly processes reply messages. In this example, KitchenServiceProxyTest tests KitchenServiceProxy. It uses Spring Cloud Contract to configure messaging stubs that verify that the command message matches a contract’s input message and replies with the corresponding output message.

The provider-side tests are code-generated by Spring Cloud Contract. Each test method corresponds to a contract. It sends the contract’s input message as a command message and verifies that the reply message matches the contract’s output message. Let’s look at the details, starting with the contract.

An example asynchronous request/response contract

Listing 10.8 shows the contract for one interaction. It consists of an input message and an output message. Both messages specify a message channel, message body, and message headers. The naming convention is from the provider’s perspective. The input message’s messageFrom element specifies the channel that the message is read from. Similarly, the output message’s sentTo element specifies the channel that the reply should be sent to.

Listing 10.8. A contract that describes how Order Service invokes Kitchen Service asynchronously
package contracts;

org.springframework.cloud.contract.spec.Contract.make {
    label 'createTicket'
    input {                                                                 1
        messageFrom('kitchenService')
        messageBody('''{"orderId":1,"restaurantId":1,"ticketDetails":{...}}''')
        messageHeaders {
            header('command_type','net.chrisrichardson...CreateTicket')
            header('command_saga_type','net.chrisrichardson...CreateOrderSaga')
            header('command_saga_id',$(consumer(regex('[0-9a-f]{16}-[0-9a-f]
               {16}'))))
            header('command_reply_to','net.chrisrichardson...CreateOrderSaga-Reply')
        }
    }
    outputMessage {                                                         2
        sentTo('net.chrisrichardson...CreateOrderSaga-reply')
        body([
                ticketId: 1
        ])
        headers {
            header('reply_type', 'net.chrisrichardson...CreateTicketReply')
            header('reply_outcome-type', 'SUCCESS')
        }
    }
}

  • 1 The command message sent by Order Service to the kitchenService channel
  • 2 The reply message sent by Kitchen Service

In this example contract, the input message is a CreateTicket command that’s sent to the kitchenService channel. The output message is a successful reply that’s sent to the CreateOrderSaga’s reply channel. Let’s look at how to use this contract in tests, starting with the consumer-side tests for Order Service.
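To make the stubbing idea concrete, here is a minimal, hypothetical sketch of the matching rule a messaging stub applies to the contract in Listing 10.8: an incoming command is matched by channel and `command_type` header, and the contract's output message body is returned as the reply. The class and method names are illustrative only; they are not part of Spring Cloud Contract's API.

```java
import java.util.Map;

// Hypothetical sketch of how a messaging stub could match a command
// against the contract in Listing 10.8. Names are illustrative only.
public class ContractMatcherSketch {

    // Values taken from the contract's input and output messages
    static final String CHANNEL = "kitchenService";
    static final String COMMAND_TYPE = "net.chrisrichardson...CreateTicket";
    static final String REPLY_BODY = "{\"ticketId\":1}";

    // Returns the contract's output message body if the command matches, else null
    static String reply(String channel, Map<String, String> headers) {
        if (CHANNEL.equals(channel)
                && COMMAND_TYPE.equals(headers.get("command_type"))) {
            return REPLY_BODY;
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(reply("kitchenService",
                Map.of("command_type", "net.chrisrichardson...CreateTicket")));
    }
}
```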

Consumer-side contract integration tests for asynchronous request/response interactions

The strategy for writing a consumer-side integration test for an asynchronous request/response interaction is similar to testing a REST client. The test invokes the service’s messaging proxy and verifies two aspects of its behavior. First, it verifies that the messaging proxy sends a command message that conforms to the contract. Second, it verifies that the proxy properly handles the reply message.

Listing 10.9 shows the consumer-side integration test for KitchenServiceProxy, which is the messaging proxy used by Order Service to invoke Kitchen Service. Each test sends a command message using KitchenServiceProxy and verifies that it returns the expected result. It uses Spring Cloud Contract to configure messaging stubs for Kitchen Service that find the contract whose input message matches the command message and sends its output message as the reply. The tests use in-memory messaging for simplicity and speed.

Listing 10.9. The consumer-side contract integration test for Order Service
@RunWith(SpringRunner.class)
@SpringBootTest(classes=
     KitchenServiceProxyIntegrationTest.TestConfiguration.class,
        webEnvironment= SpringBootTest.WebEnvironment.NONE)
@AutoConfigureStubRunner(ids =                                               1
         {"net.chrisrichardson.ftgo.contracts:ftgo-kitchen-service-contracts"},
        workOffline = false)
@DirtiesContext
public class KitchenServiceProxyIntegrationTest {


  @Configuration
  @EnableAutoConfiguration
  @Import({TramCommandProducerConfiguration.class,
          TramInMemoryConfiguration.class,
            EventuateContractVerifierConfiguration.class})
  public static class TestConfiguration { ... }

  @Autowired
  private SagaMessagingTestHelper sagaMessagingTestHelper;

  @Autowired
  private  KitchenServiceProxy kitchenServiceProxy;

  @Test
  public void shouldSuccessfullyCreateTicket() {
    CreateTicket command = new CreateTicket(AJANTA_ID,
          OrderDetailsMother.ORDER_ID,
      new TicketDetails(Collections.singletonList(
        new TicketLineItem(CHICKEN_VINDALOO_MENU_ITEM_ID,
                           CHICKEN_VINDALOO,
                           CHICKEN_VINDALOO_QUANTITY))));

    String sagaType = CreateOrderSaga.class.getName();

    CreateTicketReply reply =
       sagaMessagingTestHelper                                               2
             .sendAndReceiveCommand(kitchenServiceProxy.create,
                                   command,
                                    CreateTicketReply.class, sagaType);

    assertEquals(new CreateTicketReply(OrderDetailsMother.ORDER_ID), reply); 3

  }

}

  • 1 Configure the stub Kitchen Service to respond to messages.
  • 2 Send the command and wait for a reply.
  • 3 Verify the reply.

The shouldSuccessfullyCreateTicket() test method sends a CreateTicket command message and verifies that the reply contains the expected data. It uses SagaMessagingTestHelper, which is a test helper class that synchronously sends and receives messages.
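The essence of such a helper is "publish a command, then block until the stub's reply arrives." The following is a deliberately simplified, self-contained sketch of that idea using a `BlockingQueue`; it is not the actual SagaMessagingTestHelper implementation, and the fake inline reply stands in for a real messaging stub.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a synchronous send-and-receive helper in the spirit of
// SagaMessagingTestHelper. The real helper publishes the command to a message
// channel; here a fake stub replies inline so the sketch is self-contained.
public class SendAndReceiveSketch {

    private final BlockingQueue<String> replies = new ArrayBlockingQueue<>(1);

    // Invoked by the messaging stub when a reply message arrives
    void onReply(String reply) {
        replies.add(reply);
    }

    String sendAndReceive(String command) {
        onReply("reply-to-" + command); // stand-in for the stub's asynchronous reply
        try {
            String reply = replies.poll(5, TimeUnit.SECONDS); // block until the reply arrives
            if (reply == null) {
                throw new IllegalStateException("timed out waiting for reply");
            }
            return reply;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(new SendAndReceiveSketch().sendAndReceive("CreateTicket"));
    }
}
```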

Let’s now look at how to write provider-side integration tests.

Writing provider-side, consumer-driven contract tests for asynchronous request/response interactions

A provider-side integration test must verify that the provider handles a command message by sending the correct reply. Spring Cloud Contract generates test classes that have a test method for each contract. Each test method sends the contract’s input message and verifies that the reply matches the contract’s output message.

The provider-side integration tests for Kitchen Service test KitchenServiceCommandHandler. The KitchenServiceCommandHandler class handles a message by invoking KitchenService. The following listing shows the AbstractKitchenServiceConsumerContractTest class, which is the base class for the Spring Cloud Contract-generated tests. It creates a KitchenServiceCommandHandler injected with a mock KitchenService.

Listing 10.10. The superclass of Kitchen Service's provider-side, consumer-driven contract tests
@RunWith(SpringRunner.class)
@SpringBootTest(classes =
     AbstractKitchenServiceConsumerContractTest.TestConfiguration.class,
                webEnvironment = SpringBootTest.WebEnvironment.NONE)
@AutoConfigureMessageVerifier
public abstract class AbstractKitchenServiceConsumerContractTest {

  @Configuration
  @Import(RestaurantMessageHandlersConfiguration.class)
  public static class TestConfiguration {
    ...
    @Bean
    public KitchenService kitchenService() {            1
       return mock(KitchenService.class);
    }
  }

  @Autowired
  private KitchenService kitchenService;

  @Before
  public void setup() {
     reset(kitchenService);
     when(kitchenService
           .createTicket(eq(1L), eq(1L),                2
                           any(TicketDetails.class)))
           .thenReturn(new Ticket(1L, 1L,
                        new TicketDetails(Collections.emptyList())));
  }

}

  • 1 Overrides the definition of the kitchenService @Bean with a mock
  • 2 Configures the mock to return the values that match a contract’s output message

KitchenServiceCommandHandler invokes KitchenService with arguments that are derived from a contract’s input message and creates a reply message that’s derived from the return value. The test class’s setup() method configures the mock KitchenService to return the values that match the contract’s output message.

Integration tests and unit tests verify the behavior of individual parts of a service. The integration tests verify that services can communicate with their clients and dependencies. The unit tests verify that a service’s logic is correct. Neither type of test runs the entire service. In order to verify that a service as a whole works, we’ll move up the pyramid and look at how to write component tests.

10.2. Developing component tests

So far, we’ve looked at how to test individual classes and clusters of classes. But imagine that we now want to verify that Order Service works as expected. In other words, we want to write the service’s acceptance tests, which treat it as a black box and verify its behavior through its API. One approach is to write what are essentially end-to-end tests and deploy Order Service and all of its transitive dependencies. As you should know by now, that’s a slow, brittle, and expensive way to test a service.

Pattern: Service component test

Test a service in isolation. See http://microservices.io/patterns/testing/service-component-test.html.

A much better way to write acceptance tests for a service is to use component testing. As figure 10.6 shows, component tests are sandwiched between integration tests and end-to-end tests. Component testing verifies the behavior of a service in isolation. It replaces a service’s dependencies with stubs that simulate their behavior. It might even use in-memory versions of infrastructure services such as databases. As a result, component tests are much easier to write and faster to run.

Figure 10.6. A component test tests a service in isolation. It typically uses stubs for the service’s dependencies.

I begin by briefly describing how to use a testing DSL called Gherkin to write acceptance tests for services, such as Order Service. After that I discuss various component testing design issues. I then show how to write acceptance tests for Order Service.

Let’s look at writing acceptance tests using Gherkin.

10.2.1. Defining acceptance tests

Acceptance tests are business-facing tests for a software component. They describe the desired externally visible behavior from the perspective of the component’s clients rather than in terms of the internal implementation. These tests are derived from user stories or use cases. For example, one of the key stories for Order Service is the Place Order story:

As a consumer of the Order Service
I should be able to place an order

We can expand this story into scenarios such as the following:

Given a valid consumer
Given using a valid credit card
Given the restaurant is accepting orders
When I place an order for Chicken Vindaloo at Ajanta
Then the order should be APPROVED
And an OrderAuthorized event should be published

This scenario describes the desired behavior of Order Service in terms of its API.

Each scenario defines an acceptance test. The givens correspond to the test’s setup phase, the when maps to the execute phase, and the then and the and to the verification phase. Later, you see a test for this scenario that does the following:

  1. Creates an Order by invoking the POST /orders endpoint
  2. Verifies the state of the Order by invoking the GET /orders/{orderId} endpoint
  3. Verifies that the Order Service published an OrderAuthorized event by subscribing to the appropriate message channel

We could translate each scenario into Java code. An easier option, though, is to write the acceptance tests using a DSL such as Gherkin.

10.2.2. Writing acceptance tests using Gherkin

Writing acceptance tests in Java is challenging. There’s a risk that the scenarios and the Java tests diverge. There’s also a disconnect between the high-level scenarios and the Java tests, which consist of low-level implementation details. Also, there’s a risk that a scenario lacks precision or is ambiguous and can’t be translated into Java code. A much better approach is to eliminate the manual translation step and write executable scenarios.

Gherkin is a DSL for writing executable specifications. When using Gherkin, you define your acceptance tests using English-like scenarios, such as the one shown earlier. You then execute the specifications using Cucumber, a test automation framework for Gherkin. Gherkin and Cucumber eliminate the need to manually translate scenarios into runnable code.

The Gherkin specification for a service such as Order Service consists of a set of features. Each feature is described by a set of scenarios such as the one you saw earlier. A scenario has the given-when-then structure. The givens are the preconditions, the when is the action or event that occurs, and the then/and are the expected outcome.
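The given-when-then structure maps each step keyword to a phase of the test. The following toy classifier (illustrative only, not part of Cucumber) makes that mapping explicit; note that in full Gherkin an And step inherits the phase of the preceding keyword, whereas in this chapter's scenarios And appears only in the verification phase.

```java
// Toy classifier (not part of Cucumber) that maps a Gherkin step keyword to the
// test phase it belongs to. In full Gherkin, "And" inherits the phase of the
// preceding step; in this chapter's scenarios it occurs only during verification.
public class GherkinPhases {

    static String phase(String step) {
        String keyword = step.trim().split("\\s+")[0];
        switch (keyword) {
            case "Given": return "setup";
            case "When":  return "execute";
            case "Then":
            case "And":   return "verify";
            default:      return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println(phase("Given a valid consumer"));            // setup
        System.out.println(phase("When I place an order for Chicken Vindaloo at Ajanta")); // execute
        System.out.println(phase("Then the order should be APPROVED")); // verify
    }
}
```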

For example, the desired behavior of Order Service is defined by several features, including Place Order, Cancel Order, and Revise Order. Listing 10.11 is an excerpt of the Place Order feature. This feature consists of several elements:

  • Name—For this feature, the name is Place Order.
  • Specification brief—This describes why the feature exists. For this feature, the specification brief is the user story.
  • Scenarios—Order authorized and Order rejected due to expired credit card.

Listing 10.11. The Gherkin definition of the Place Order feature and a couple of its scenarios
Feature: Place Order

  As a consumer of the Order Service
  I should be able to place an order

  Scenario: Order authorized
    Given a valid consumer
    Given using a valid credit card
    Given the restaurant is accepting orders
    When I place an order for Chicken Vindaloo at Ajanta
    Then the order should be APPROVED
    And an OrderAuthorized event should be published

  Scenario: Order rejected due to expired credit card
    Given a valid consumer
    Given using an expired credit card
    Given the restaurant is accepting orders
    When I place an order for Chicken Vindaloo at Ajanta
    Then the order should be REJECTED
    And an OrderRejected event should be published

...

In both scenarios, a consumer attempts to place an order. In the first scenario, they succeed. In the second scenario, the order is rejected because the consumer’s credit card has expired. For more information on Gherkin, see the book Writing Great Specifications: Using Specification by Example and Gherkin by Kamil Nicieja (Manning, 2017).

Executing Gherkin specifications using Cucumber

Cucumber is an automated testing framework that executes tests written in Gherkin. It’s available in a variety of languages, including Java. When using Cucumber for Java, you write a step definition class, such as the one shown in listing 10.12. A step definition class consists of methods that define the meaning of each given-then-when step. Each step definition method is annotated with either @Given, @When, @Then, or @And. Each of these annotations has a value element that’s a regular expression, which Cucumber matches against the steps.

Listing 10.12. The Java step definition class that makes the Gherkin scenarios executable
public class StepDefinitions ...  {

  ...

  @Given("A valid consumer")
  public void useConsumer() { ... }

  @Given("using a(.?) (.*) credit card")
  public void useCreditCard(String ignore, String creditCard) { ... }

  @When("I place an order for Chicken Vindaloo at Ajanta")
  public void placeOrder() { ... }

  @Then("the order should be (.*)")
  public void theOrderShouldBe(String desiredOrderState) { ... }

  @And("an (.*) event should be published")
  public void verifyEventPublished(String expectedEventClass)  { ... }

}

Each type of method is part of a particular phase of the test:

  • @Given—The setup phase
  • @When—The execute phase
  • @Then and @And—The verification phase

Later in section 10.2.4, when I describe this class in more detail, you’ll see that many of these methods make REST calls to Order Service. For example, the placeOrder() method creates an Order by invoking the POST /orders REST endpoint. The theOrderShouldBe() method verifies the status of the order by invoking GET /orders/{orderId}.
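The value elements of the step annotations are regular expressions. For example, the pattern `using a(.?) (.*) credit card` from Listing 10.12 matches both "using a valid credit card" and "using an expired credit card", capturing the card descriptor in the second group. This standalone sketch demonstrates the matching with plain java.util.regex, outside of Cucumber:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Standalone demo of the step-matching regex used by the @Given annotation
// in Listing 10.12: "using a(.?) (.*) credit card".
public class StepRegexDemo {

    static final Pattern STEP = Pattern.compile("using a(.?) (.*) credit card");

    // Returns the captured card descriptor, e.g. "valid" or "expired"
    static String cardDescriptor(String stepText) {
        Matcher m = STEP.matcher(stepText);
        if (!m.matches()) {
            throw new IllegalArgumentException("step does not match: " + stepText);
        }
        return m.group(2);
    }

    public static void main(String[] args) {
        System.out.println(cardDescriptor("using a valid credit card"));    // valid
        System.out.println(cardDescriptor("using an expired credit card")); // expired
    }
}
```

The optional `(.?)` group swallows the trailing "n" of "an", so a single pattern handles both articles.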

But before getting into the details of how to write step classes, let’s explore some design issues with component tests.

10.2.3. Designing component tests

Imagine you’re implementing the component tests for Order Service. Section 10.2.2 shows how to specify the desired behavior using Gherkin and execute it using Cucumber. But before a component test can execute the Gherkin scenarios, it must first run Order Service and set up the service’s dependencies. You need to test Order Service in isolation, so the component test must configure stubs for several services, including Kitchen Service. It also needs to set up a database and the messaging infrastructure. There are a few different options that trade off realism with speed and simplicity.

In-process component testing

One option is to write in-process component tests. An in-process component test runs the service with in-memory stubs and mocks for its dependencies. For example, you can write a component test for a Spring Boot-based service using the Spring Boot testing framework. A test class, which is annotated with @SpringBootTest, runs the service in the same JVM as the test. It uses dependency injection to configure the service to use mocks and stubs. For instance, a test for Order Service would configure it to use an in-memory JDBC database, such as H2, HSQLDB, or Derby, and in-memory stubs for Eventuate Tram. In-process tests are simpler to write and faster, but have the downside of not testing the deployable service.

Out-of-process component testing

A more realistic approach is to package the service in a production-ready format and run it as a separate process. For example, chapter 12 explains that it’s increasingly common to package services as Docker container images. An out-of-process component test uses real infrastructure services, such as databases and message brokers, but uses stubs for any dependencies that are application services. For example, an out-of-process component test for FTGO Order Service would use MySQL and Apache Kafka, and stubs for services including Consumer Service and Accounting Service. Because Order Service interacts with those services using messaging, these stubs would consume messages from Apache Kafka and send back reply messages.

A key benefit of out-of-process component testing is that it improves test coverage, because what’s being tested is much closer to what’s being deployed. The drawback is that this type of test is more complex to write, slower to execute, and potentially more brittle than an in-process component test. You also have to figure out how to stub the application services. Let’s look at how to do that.

How to stub services in out-of-process component tests

The service under test often invokes dependencies using interaction styles that involve sending back a response. Order Service, for example, uses asynchronous request/response and sends command messages to various services. API Gateway uses HTTP, which is a request/response interaction style. An out-of-process test must configure stubs for these kinds of dependencies, which handle requests and send back replies.

One option is to use Spring Cloud Contract, which we looked at earlier in section 10.1 when discussing integration tests. We could write contracts that configure stubs for component tests. One thing to consider, though, is that it’s likely that these contracts, unlike those used for integration, would only be used by the component tests.

Another drawback of using Spring Cloud Contract for component testing is that because its focus is consumer contract testing, it takes a somewhat heavyweight approach. The JAR files containing the contracts must be deployed in a Maven repository rather than merely being on the classpath. Handling interactions involving dynamically generated values is also challenging. Consequently, a simpler option is to configure stubs from within the test itself.

A test can, for example, configure an HTTP stub using the WireMock stubbing DSL. Similarly, a test for a service that uses Eventuate Tram messaging can configure messaging stubs. Later in this section I show an easy-to-use Java library that does this.

Now that we’ve looked at how to design component tests, let’s consider how to write component tests for the FTGO Order Service.

10.2.4. Writing component tests for the FTGO Order Service

As you saw earlier in this section, there are a few different ways to implement component tests. This section describes the component tests for Order Service that use the out-of-process strategy to test the service running as a Docker container. You’ll see how the tests use a Gradle plugin to start and stop the Docker container. I discuss how to use Cucumber to execute the Gherkin-based scenarios that define the desired behavior for Order Service.

Figure 10.7 shows the design of the component tests for Order Service. OrderServiceComponentTest is the test class that runs Cucumber:

@RunWith(Cucumber.class)
@CucumberOptions(features = "src/component-test/resources/features")
public class OrderServiceComponentTest {
}
Figure 10.7. The component tests for Order Service use the Cucumber testing framework to execute test scenarios written with the Gherkin acceptance testing DSL. The tests use Docker to run Order Service along with its infrastructure services, such as Apache Kafka and MySQL.

It has a @CucumberOptions annotation that specifies where to find the Gherkin feature files. It’s also annotated with @RunWith(Cucumber.class), which tells JUnit to use the Cucumber test runner. But unlike a typical JUnit-based test class, it doesn’t have any test methods. Instead, it defines the tests by reading the Gherkin features and uses the OrderServiceComponentTestStepDefinitions class to make them executable.

Using Cucumber with the Spring Boot testing framework requires a slightly unusual structure. Despite not being a test class, OrderServiceComponentTestStepDefinitions is still annotated with @ContextConfiguration, which is part of the Spring Testing framework. It creates the Spring ApplicationContext, which defines the various Spring components, including messaging stubs. Let’s look at the details of the step definitions.

The OrderServiceComponentTestStepDefinitions class

The OrderServiceComponentTestStepDefinitions class is the heart of the tests. This class defines the meaning of each step in Order Service’s component tests. The following listing shows the useCreditCard() method, which defines the meaning of the Given using ... credit card step.

Listing 10.13. The @Given useCreditCard() method defines the meaning of the Given using ... credit card step.
@ContextConfiguration(classes =
     OrderServiceComponentTestStepDefinitions.TestConfiguration.class)
public class OrderServiceComponentTestStepDefinitions {

  ...

  @Autowired
  protected SagaParticipantStubManager sagaParticipantStubManager;

  @Given("using a(.?) (.*) credit card")
  public void useCreditCard(String ignore, String creditCard) {
    if (creditCard.equals("valid"))
      sagaParticipantStubManager                                1
            .forChannel("accountingService")
            .when(AuthorizeCommand.class).replyWithSuccess();
    else if (creditCard.equals("invalid"))
      sagaParticipantStubManager                                2
               .forChannel("accountingService")
              .when(AuthorizeCommand.class).replyWithFailure();
    else
      fail("Don't know what to do with this credit card");
  }

  • 1 Send a success reply.
  • 2 Send a failure reply.

This method uses the SagaParticipantStubManager class, a test helper class that configures stubs for saga participants. The useCreditCard() method uses it to configure the Accounting Service stub to reply with either a success or a failure message, depending on the specified credit card.

The following listing shows the placeOrder() method, which defines the When I place an order for Chicken Vindaloo at Ajanta step. It invokes the Order Service REST API to create Order and saves the response for validation in a later step.

Listing 10.14. The placeOrder() method defines the When I place an order for Chicken Vindaloo at Ajanta step.
@ContextConfiguration(classes =
     OrderServiceComponentTestStepDefinitions.TestConfiguration.class)
public class OrderServiceComponentTestStepDefinitions {

  private int port = 8082;
  private String host = System.getenv("DOCKER_HOST_IP");

  protected String baseUrl(String path) {
    return String.format("http://%s:%s%s", host, port, path);
  }

  private Response response;

  @When("I place an order for Chicken Vindaloo at Ajanta")
  public void placeOrder() {

    response = given().                                               1
            body(new CreateOrderRequest(consumerId,
                    RestaurantMother.AJANTA_ID, Collections.singletonList(
                        new CreateOrderRequest.LineItem(
                           RestaurantMother.CHICKEN_VINDALOO_MENU_ITEM_ID,
                          OrderDetailsMother.CHICKEN_VINDALOO_QUANTITY)))).
            contentType("application/json").
            when().
            post(baseUrl("/orders"));
  }

  • 1 Invokes the Order Service REST API to create Order

The baseUrl() helper method returns the URL of the Order Service.

Listing 10.15 shows the theOrderShouldBe() method, which defines the meaning of the Then the order should be ... step. It verifies that Order was successfully created and that it’s in the expected state.

Listing 10.15. The @Then theOrderShouldBe() method verifies that the HTTP request was successful.
@ContextConfiguration(classes =
     OrderServiceComponentTestStepDefinitions.TestConfiguration.class)
public class OrderServiceComponentTestStepDefinitions {

  @Then("the order should be (.*)")
  public void theOrderShouldBe(String desiredOrderState) {

    Integer orderId =                                     1
             this.response. then(). statusCode(200).
                    extract(). path("orderId");

    assertNotNull(orderId);

    eventually(() -> {
      String state = given().
              when().
              get(baseUrl("/orders/" + orderId)).
              then().
              statusCode(200)
              .extract().
                      path("state");
      assertEquals(desiredOrderState, state);             2
     });

  }
}

  • 1 Verify that Order was created successfully.
  • 2 Verify the state of Order.

The assertion of the expected state is wrapped in a call to eventually(), which repeatedly executes the assertion.
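As an illustration, a minimal eventually() helper can be sketched as a retry loop. This is an assumption about its behavior, not the actual helper used by the FTGO tests, which may differ in attempt count, delay, and API.

```java
// Hypothetical sketch of an eventually() helper: retry an assertion until it
// passes or the attempts are exhausted.
public class Eventually {

    public static void eventually(Runnable assertion) {
        AssertionError lastFailure = null;
        for (int attempt = 0; attempt < 10; attempt++) {  // up to 10 attempts
            try {
                assertion.run();                          // assertion passed: done
                return;
            } catch (AssertionError e) {
                lastFailure = e;                          // remember the failure
                try {
                    Thread.sleep(100);                    // wait before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        throw lastFailure;                                // still failing: rethrow
    }
}
```

Retrying is essential here because the order's state changes asynchronously, after the saga's messages have been processed.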

The following listing shows the verifyEventPublished() method, which defines the And an ... event should be published step. It verifies that the expected domain event was published.

Listing 10.16. The Cucumber step definitions class for Order Service's component tests
@ContextConfiguration(classes =
     OrderServiceComponentTestStepDefinitions.TestConfiguration.class)
public class OrderServiceComponentTestStepDefinitions {

  @Autowired
  protected MessageTracker messageTracker;

  @And("an (.*) event should be published")
  public void verifyEventPublished(String expectedEventClass) throws ClassNot
     FoundException {
    messageTracker.assertDomainEventPublished("net.chrisrichardson.ftgo.order
     service.domain.Order",
            (Class<DomainEvent>)Class.forName("net.chrisrichardson.ftgo.order
     service.domain." + expectedEventClass));
  }
  ....
}

The verifyEventPublished() method uses the MessageTracker class, a test helper class that records the events that have been published during the test. Both this class and SagaParticipantStubManager are instantiated by the TestConfiguration @Configuration class.
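To make the idea concrete, here is a much-simplified, hypothetical event recorder in the spirit of MessageTracker. The real class consumes events from message channels and has a different API; this sketch only illustrates the record-then-assert pattern.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical, simplified stand-in for MessageTracker: it records published
// events so that a test can assert that an event of the expected type was
// published.
public class RecordingMessageTracker {

    private final List<Object> publishedEvents = new CopyOnWriteArrayList<>();

    // Called whenever an event is published during the test
    public void record(Object event) {
        publishedEvents.add(event);
    }

    // Fails if no recorded event is an instance of the expected type
    public void assertEventPublished(Class<?> expectedType) {
        boolean found = publishedEvents.stream().anyMatch(expectedType::isInstance);
        if (!found) {
            throw new AssertionError("No event of type " + expectedType.getName() + " was published");
        }
    }
}
```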

Now that we’ve looked at the step definitions, let’s look at how to run the component tests.

Running the component tests

Because these tests are relatively slow, we don’t want to run them as part of ./gradlew test. Instead, we’ll put the test code in a separate src/component-test/java directory and run them using ./gradlew componentTest. Take a look at the ftgo-order-service/build.gradle file to see the Gradle configuration.
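The Gradle configuration referred to here might look something like the following sketch, which defines a component-test source set and task. The details are illustrative; the actual ftgo-order-service/build.gradle may differ.

```groovy
// Hedged sketch: a separate source set and Test task for component tests,
// so they don't run as part of ./gradlew test.
sourceSets {
    componentTest {
        java.srcDir file('src/component-test/java')
        resources.srcDir file('src/component-test/resources')
        compileClasspath += sourceSets.main.output + configurations.testRuntimeClasspath
        runtimeClasspath += output + compileClasspath
    }
}

task componentTest(type: Test) {
    testClassesDirs = sourceSets.componentTest.output.classesDirs
    classpath = sourceSets.componentTest.runtimeClasspath
}
```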

The tests use Docker to run Order Service and its dependencies. As described in chapter 12, a Docker container is a lightweight operating system virtualization mechanism that lets you deploy a service instance in an isolated sandbox. Docker Compose is an extremely useful tool with which you can define a set of containers and start and stop them as a unit. The FTGO application has a docker-compose file in the root directory that defines containers for all the services and the infrastructure services.

We can use the Gradle Docker Compose plugin to run the containers before executing the tests and stop the containers once the tests complete:

apply plugin: 'docker-compose'

dockerCompose.isRequiredBy(componentTest)
componentTest.dependsOn(assemble)

dockerCompose {
   startedServices = [ 'ftgo-order-service']
}

The preceding snippet of Gradle configuration does two things. First, it configures the Gradle Docker Compose plugin to run before the component tests and start Order Service along with the infrastructure services that it's configured to depend on. Second, it configures componentTest to depend on assemble so that the JAR file required by the Docker image is built first. With that in place, we can run the component tests with the following command:

./gradlew  :ftgo-order-service:componentTest

This command, which takes a couple of minutes, performs the following actions:

  1. Build Order Service.
  2. Run the service and its infrastructure services.
  3. Run the tests.
  4. Stop the running services.

Now that we’ve looked at how to test a service in isolation, we’ll see how to test the entire application.

10.3. Writing end-to-end tests

Component testing tests each service separately. End-to-end testing, though, tests the entire application. As figure 10.8 shows, end-to-end testing is the top of the test pyramid. That’s because these kinds of tests are—say it with me now—slow, brittle, and time consuming to develop.

Figure 10.8. End-to-end tests are at the top of the test pyramid. They are slow, brittle, and time consuming to develop. You should minimize the number of end-to-end tests.

End-to-end tests have a large number of moving parts. You must deploy multiple services and their supporting infrastructure services. As a result, end-to-end tests are slow. Also, if your test needs to deploy a large number of services, there’s a good chance one of them will fail to deploy, making the tests unreliable. Consequently, you should minimize the number of end-to-end tests.

10.3.1. Designing end-to-end tests

As I’ve explained, it’s best to write as few of these as possible. A good strategy is to write user journey tests. A user journey test corresponds to a user’s journey through the system. For example, rather than test create order, revise order, and cancel order separately, you can write a single test that does all three. This approach significantly reduces the number of tests you must write and shortens the test execution time.

10.3.2. Writing end-to-end tests

End-to-end tests are, like the acceptance tests covered in section 10.2, business-facing tests. It makes sense to write them in a high-level DSL that’s understood by the business people. You can, for example, write the end-to-end tests using Gherkin and execute them using Cucumber. The following listing shows an example of such a test. It’s similar to the acceptance tests we looked at earlier. The main difference is that rather than a single Then, this test has multiple actions.

Listing 10.17. The Gherkin-based specification of a user journey
Feature: Place Revise and Cancel

  As a consumer of the Order Service
  I should be able to place, revise, and cancel an order

  Scenario: Order created, revised, and cancelled
    Given a valid consumer
    Given using a valid credit card
    Given the restaurant is accepting orders
    When I place an order for Chicken Vindaloo at Ajanta          1
     Then the order should be APPROVED
    Then the order total should be 16.33
    And when I revise the order by adding 2 vegetable samosas     2
     Then the order total should be 20.97
    And when I cancel the order
    Then the order should be CANCELLED                            3

  • 1 Create Order.
  • 2 Revise Order.
  • 3 Cancel Order.

This scenario places an order, revises it, and then cancels it. Let’s look at how to run it.

10.3.3. Running end-to-end tests

End-to-end tests must run the entire application, including any required infrastructure services. As you saw earlier in section 10.2, the Gradle Docker Compose plugin provides a convenient way to do this. Instead of running a single application service, though, the Docker Compose file runs all the application's services.

Now that we’ve looked at different aspects of designing and writing end-to-end tests, let’s see an example end-to-end test.

The ftgo-end-to-end-test module implements the end-to-end tests for the FTGO application. The implementation of the end-to-end test is quite similar to the implementation of the component tests discussed earlier in section 10.2. These tests are written using Gherkin and executed using Cucumber. The Gradle Docker Compose plugin runs the containers before the tests run. It takes around four to five minutes to start the containers and run the tests.

That may not seem like a long time, but this is a relatively simple application with just a handful of containers and tests. Imagine if there were hundreds of containers and many more tests. The tests could take quite a long time. Consequently, it’s best to focus on writing tests that are lower down the pyramid.

Summary

  • Use contracts, which are example messages, to drive the testing of interactions between services. Rather than write slow-running tests that run both services and their transitive dependencies, write tests that verify that the adapters of both services conform to the contracts.
  • Write component tests to verify the behavior of a service via its API. You should simplify and speed up component tests by testing a service in isolation, using stubs for its dependencies.
  • Write user journey tests to minimize the number of end-to-end tests, which are slow, brittle, and time consuming. A user journey test simulates a user's journey through the application and verifies high-level behavior of a relatively large slice of the application's functionality. Because there are few tests, the amount of per-test overhead, such as test setup, is minimized, which speeds up the tests.

Chapter 11. Developing production-ready services

This chapter covers

  • Developing secure services
  • Applying the Externalized configuration pattern
  • Applying the observability patterns:

    • Health check API
    • Log aggregation
    • Distributed tracing
    • Exception tracking
    • Application metrics
    • Audit logging
  • Simplifying the development of services by applying the Microservice chassis pattern

Mary and her team felt that they had mastered service decomposition, interservice communication, transaction management, querying and business logic design, and testing. They were confident that they could develop services that met their functional requirements. But in order for a service to be ready to be deployed into production, they needed to ensure that it would also satisfy three critically important quality attributes: security, configurability, and observability.

The first quality attribute is application security. It’s essential to develop secure applications, unless you want your company to be in the headlines for a data breach. Fortunately, most aspects of security in a microservice architecture are not any different than in a monolithic application. The FTGO team knew that much of what they had learned over the years developing the monolith also applied to microservices. But the microservice architecture forces you to implement some aspects of application-level security differently. For example, you need to implement a mechanism to pass the identity of the user from one service to another.

The second quality attribute you must address is service configurability. A service typically uses one or more external services, such as message brokers and databases. The network location and credentials of each external service often depend on the environment that the service is running in. You can’t hard-wire the configuration properties into the service. Instead, you must use an externalized configuration mechanism that provides a service with configuration properties at runtime.

The third quality attribute is observability. The FTGO team had implemented monitoring and logging for the existing application. But a microservice architecture is a distributed system, and that presents some additional challenges. Every request is handled by the API gateway and at least one service. Imagine, for example, that you’re trying to determine which of six services is causing a latency issue. Or imagine trying to understand how a request is handled when the log entries are scattered across five different services. In order to make it easier to understand the behavior of your application and troubleshoot problems, you must implement several observability patterns.

I begin this chapter by describing how to implement security in a microservice architecture. Next, I discuss how to design services that are configurable. I cover a couple of different service configuration mechanisms. After that I talk about how to make your services easier to understand and troubleshoot by using the observability patterns. I end the chapter by showing how to simplify the implementation of these and other concerns by developing your services on top of a microservice chassis framework.

Let’s first look at security.

11.1. Developing secure services

Cybersecurity has become a critical issue for every organization. Almost every day there are headlines about how hackers have stolen a company’s data. In order to develop secure software and stay out of the headlines, an organization needs to tackle a diverse range of security issues, including physical security of the hardware, encryption of data in transit and at rest, authentication and authorization, and policies for patching software vulnerabilities. Most of these issues are the same regardless of whether you’re using a monolithic or microservice architecture. This section focuses on how the microservice architecture impacts security at the application level.

An application developer is primarily responsible for implementing four different aspects of security:

  • Authentication: Verifying the identity of the application or human (a.k.a. the principal) that's attempting to access the application. For example, an application typically verifies a principal's credentials, such as a user ID and password or an application's API key and secret.
  • Authorization: Verifying that the principal is allowed to perform the requested operation on the specified data. Applications often use a combination of role-based security and access control lists (ACLs). Role-based security assigns each user one or more roles that grant them permission to invoke particular operations. ACLs grant users or roles permission to perform an operation on a particular business object, or aggregate.
  • Auditing: Tracking the operations that a principal performs in order to detect security issues, help customer support, and enforce compliance.
  • Secure interprocess communication: Ideally, all communication in and out of services should be over Transport Layer Security (TLS). Interservice communication may even need to use authentication.

I describe auditing in detail in section 11.3 and touch on securing interservice communication when discussing service meshes in section 11.4.1. This section focuses on implementing authentication and authorization.

I begin by first describing how security is implemented in the FTGO monolith application. I then describe the challenges with implementing security in a microservice architecture and how techniques that work well in a monolithic architecture can’t be used in a microservice architecture. After that I cover how to implement security in a microservice architecture.

Let’s start by reviewing how the monolithic FTGO application handles security.

11.1.1. Overview of security in a traditional monolithic application

The FTGO application has several kinds of human users, including consumers, couriers, and restaurant staff. They access the application using browser-based web applications and mobile applications. All FTGO users must log in to access the application. Figure 11.1 shows how the clients of the monolithic FTGO application authenticate and make requests.

Figure 11.1. A client of the FTGO application first logs in to obtain a session token, which is often a cookie. The client includes the session token in each subsequent request.

When a user logs in with their user ID and password, the client makes a POST request containing the user’s credentials to the FTGO application. The FTGO application verifies the credentials and returns a session token to the client. The client includes the session token in each subsequent request to the FTGO application.

Figure 11.2 shows a high-level view of how the FTGO application implements security. The FTGO application is written in Java and uses the Spring Security framework, but I’ll describe the design using generic terms that are applicable to other frameworks, such as Passport for NodeJS.

Figure 11.2. When a client of the FTGO application makes a login request, Login Handler authenticates the user, initializes the session with user information, and returns a session token cookie, which securely identifies the session. Later, when the client makes a request containing the session token, SessionBasedSecurityInterceptor retrieves the user information from the specified session and establishes the security context. A request handler, such as OrderDetailsRequestHandler, retrieves the user information from the security context.

Using a security framework

Implementing authentication and authorization correctly is challenging. It’s best to use a proven security framework. Which framework to use depends on your application’s technology stack. Some popular frameworks include the following:

One key part of the security architecture is the session, which stores the principal’s ID and roles. The FTGO application is a traditional Java EE application, so the session is an HttpSession in-memory session. A session is identified by a session token, which the client includes in each request. It’s usually an opaque token such as a cryptographically strong random number. The FTGO application’s session token is an HTTP cookie called JSESSIONID.
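For illustration, an opaque token of this kind can be generated from a cryptographically strong random number. This sketch is an assumption for clarity, not how a servlet container actually produces JSESSIONID values.

```java
import java.security.SecureRandom;
import java.util.Base64;

// Illustrative sketch: generate an opaque session token from 256 bits of
// cryptographically strong randomness, encoded as a URL-safe string.
public class SessionTokenGenerator {

    private static final SecureRandom RANDOM = new SecureRandom();

    public static String newToken() {
        byte[] bytes = new byte[32];          // 256 bits of randomness
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```

Because the token is random, it carries no information itself; the server must look up the associated session state.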

The other key part of the security implementation is the security context, which stores information about the user making the current request. The Spring Security framework uses the standard Java EE approach of storing the security context in a static, thread-local variable, which is readily accessible to any code that’s invoked to handle the request. A request handler can call SecurityContextHolder.getContext().getAuthentication() to obtain information about the current user, such as their identity and roles. In contrast, the Passport framework stores the security context as the user attribute of the request.
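The thread-local mechanism can be sketched as follows. This is a deliberate simplification for illustration; Spring Security's SecurityContextHolder offers a richer, pluggable API with full Authentication objects rather than a bare principal name.

```java
// Simplified illustration of a thread-local security context. Each
// request-handling thread sees only the principal that was set on that thread.
public class SimpleSecurityContextHolder {

    private static final ThreadLocal<String> CURRENT_PRINCIPAL = new ThreadLocal<>();

    public static void setPrincipal(String principal) {
        CURRENT_PRINCIPAL.set(principal);
    }

    public static String getPrincipal() {
        return CURRENT_PRINCIPAL.get();
    }

    // Must be called when request handling completes to avoid leaking
    // identity to the next request handled by a pooled thread
    public static void clear() {
        CURRENT_PRINCIPAL.remove();
    }
}
```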

The sequence of events shown in Figure 11.2 is as follows:

  1. The client makes a login request to the FTGO application.
  2. The login request is handled by LoginHandler, which verifies the credentials, creates the session, and stores information about the principal in the session.
  3. LoginHandler returns a session token to the client.
  4. The client includes the session token in requests that invoke operations.
  5. These requests are first processed by SessionBasedSecurityInterceptor. The interceptor authenticates each request by verifying the session token and establishes a security context. The security context describes the principal and its roles.
  6. A request handler uses the security context to determine whether to allow a user to perform the requested operation and obtain their identity.

The FTGO application uses role-based authorization. It defines several roles corresponding to the different kinds of users, including CONSUMER, RESTAURANT, COURIER, and ADMIN. It uses Spring Security’s declarative security mechanism to restrict access to URLs and service methods to specific roles. Roles are also interwoven into the business logic. For example, a consumer can only access their orders, whereas an administrator can access all orders.
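The interweaving of roles and business logic can be illustrated with a small, hypothetical check. The class and method names here are invented; the FTGO application expresses such rules with Spring Security's declarative mechanism.

```java
import java.util.Set;

// Hypothetical illustration of role-based access combined with business
// logic: an ADMIN can view any order, a CONSUMER only their own.
public class OrderAccessPolicy {

    public static boolean canViewOrder(Set<String> roles, long requesterId, long orderConsumerId) {
        if (roles.contains("ADMIN")) {
            return true;                          // administrators can access all orders
        }
        // consumers can only access orders that they placed
        return roles.contains("CONSUMER") && requesterId == orderConsumerId;
    }
}
```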

The security design used by the monolithic FTGO application is only one possible way to implement security. For example, one drawback of using an in-memory session is that it requires all requests for a particular session to be routed to the same application instance. This requirement complicates load balancing and operations. You must, for example, implement a session draining mechanism that waits for all sessions to expire before shutting down an application instance. An alternative approach, which avoids these problems, is to store the session in a database.

You can sometimes eliminate the server-side session entirely. For example, many applications have API clients that provide their credentials, such as an API key and secret, in every request. As a result, there’s no need to maintain a server-side session. Alternatively, the application can store session state in the session token. Later in this section, I describe one way to use a session token to store the session state. But let’s begin by looking at the challenges of implementing security in a microservice architecture.

11.1.2. Implementing security in a microservice architecture

A microservice architecture is a distributed architecture. Each external request is handled by the API gateway and at least one service. Consider, for example, the getOrderDetails() query, discussed in chapter 8. The API gateway handles this query by invoking several services, including Order Service, Kitchen Service, and Accounting Service. Each service must implement some aspects of security. For instance, Order Service must only allow a consumer to see their orders, which requires a combination of authentication and authorization. In order to implement security in a microservice architecture, we need to determine who is responsible for authenticating the user and who is responsible for authorization.

One challenge with implementing security in a microservices application is that we can’t just copy the design from a monolithic application. That’s because two aspects of the monolithic application’s security architecture are nonstarters for a microservice architecture:

  • In-memory security context: Using an in-memory security context, such as a thread-local, to pass around user identity. Services can’t share memory, so they can’t use an in-memory security context to pass around the user identity. In a microservice architecture, we need a different mechanism for passing user identity from one service to another.
  • Centralized session: Because an in-memory security context doesn’t make sense, neither does an in-memory session. In theory, multiple services could access a database-based session, except that it would violate the principle of loose coupling. We need a different session mechanism in a microservice architecture.

Let’s begin our exploration of security in a microservice architecture by looking at how to handle authentication.

Handling authentication in the API gateway

There are a couple of different ways to handle authentication. One option is for the individual services to authenticate the user. The problem with this approach is that it permits unauthenticated requests to enter the internal network. It relies on every development team correctly implementing security in all of their services. As a result, there’s a significant risk of an application containing security vulnerabilities.

Another problem with implementing authentication in the services is that different clients authenticate in different ways. Pure API clients supply credentials with each request using, for example, basic authentication. Other clients might first log in and then supply a session token with each request. We want to avoid requiring services to handle a diverse set of authentication mechanisms.
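For instance, with basic authentication the client’s credentials travel in the Authorization header as a Base64-encoded user:password pair. A minimal, illustrative sketch of decoding such a header follows; the BasicAuthParser class name is an assumption of this sketch, not code from the FTGO application, and a real gateway would delegate this to a framework such as Spring Security.

```java
import java.util.Base64;

// Illustrative sketch: decoding HTTP Basic authentication credentials.
public class BasicAuthParser {

    // Returns {username, password} extracted from an Authorization header,
    // or null if the header does not carry Basic authentication credentials.
    public static String[] parse(String authorizationHeader) {
        if (authorizationHeader == null || !authorizationHeader.startsWith("Basic ")) {
            return null;
        }
        String encoded = authorizationHeader.substring("Basic ".length());
        String decoded = new String(Base64.getDecoder().decode(encoded));
        int colon = decoded.indexOf(':');
        if (colon < 0) {
            return null;
        }
        return new String[] { decoded.substring(0, colon), decoded.substring(colon + 1) };
    }

    public static void main(String[] args) {
        String header = "Basic " + Base64.getEncoder().encodeToString("api-client:secret".getBytes());
        String[] credentials = parse(header);
        System.out.println(credentials[0] + " / " + credentials[1]);
    }
}
```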

A better approach is for the API gateway to authenticate a request before forwarding it to the services. Centralizing API authentication in the API gateway has the advantage that there’s only one place to get right. As a result, there’s a much smaller chance of a security vulnerability. Another benefit is that only the API gateway has to deal with the various different authentication mechanisms. It hides this complexity from the services.

Figure 11.3 shows how this approach works. Clients authenticate with the API gateway. API clients include credentials in each request. Login-based clients POST the user’s credentials to the API gateway’s authentication endpoint and receive a session token. Once the API gateway has authenticated a request, it invokes one or more services.

Pattern: Access token

The API gateway passes a token containing information about the user, such as their identity and their roles, to the services that it invokes. See http://microservices.io/patterns/security/access-token.html.

Figure 11.3. The API gateway authenticates requests from clients and includes a security token in the requests it makes to the services. The services use the token to obtain information about the principal. The API gateway can also use the security token as the session token.

A service invoked by the API gateway needs to know the principal making the request. It must also verify that the request has been authenticated. The solution is for the API gateway to include a token in each service request. The service uses the token to validate the request and obtain information about the principal. The API gateway might also give the same token to session-oriented clients to use as the session token.

The sequence of events for API clients is as follows:

  1. A client makes a request containing credentials.
  2. The API gateway authenticates the credentials, creates a security token, and passes that to the service or services.

The sequence of events for login-based clients is as follows:

  1. A client makes a login request containing credentials.
  2. The API gateway returns a security token.
  3. The client includes the security token in requests that invoke operations.
  4. The API gateway validates the security token and forwards it to the service or services.

A little later in this chapter, I describe how to implement tokens, but let’s first look at the other main aspect of security: authorization.

Handling authorization

Authenticating a client’s credentials is important but insufficient. An application must also implement an authorization mechanism that verifies that the client is allowed to perform the requested operation. For example, in the FTGO application the getOrderDetails() query can only be invoked by the consumer who placed the Order (an example of instance-based security) and a customer service agent who is helping the consumer.

One place to implement authorization is the API gateway. It can, for example, restrict access to GET /orders/{orderId} to only users who are consumers and customer service agents. If a user isn’t allowed to access a particular path, the API gateway can reject the request before forwarding it on to the service. As with authentication, centralizing authorization within the API gateway reduces the risk of security vulnerabilities. You can implement authorization in the API gateway using a security framework, such as Spring Security.

One drawback of implementing authorization in the API gateway is that it risks coupling the API gateway to the services, requiring them to be updated in lockstep. What’s more, the API gateway can typically only implement role-based access to URL paths. It’s generally not practical for the API gateway to implement ACLs that control access to individual domain objects, because that requires detailed knowledge of a service’s domain logic.

The other place to implement authorization is in the services. A service can implement role-based authorization for URLs and for service methods. It can also implement ACLs to manage access to aggregates. Order Service can, for example, implement the role-based and ACL-based authorization mechanism for controlling access to orders. Other services in the FTGO application implement similar authorization logic.
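As an illustration, the combination of role-based and instance-based checks that Order Service might apply can be sketched as follows. The class and method names are hypothetical, not the book’s actual code; a real implementation would typically express the role checks declaratively with Spring Security and keep only the instance-based check in the business logic.

```java
import java.util.Set;

// Illustrative sketch: an ADMIN (for example, a customer service agent) may
// access any order, while a CONSUMER may only access orders they placed.
public class OrderAccessPolicy {

    public static boolean canAccessOrder(Set<String> roles, String principalId, String orderConsumerId) {
        if (roles.contains("ADMIN")) {
            return true; // role-based: administrators can access all orders
        }
        if (roles.contains("CONSUMER")) {
            // instance-based (ACL-style): consumers only see their own orders
            return principalId.equals(orderConsumerId);
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(canAccessOrder(Set.of("CONSUMER"), "consumer-1", "consumer-1")); // true
        System.out.println(canAccessOrder(Set.of("CONSUMER"), "consumer-2", "consumer-1")); // false
        System.out.println(canAccessOrder(Set.of("ADMIN"), "agent-9", "consumer-1"));       // true
    }
}
```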

Using JWTs to pass user identity and roles

When implementing security in a microservice architecture, you need to decide which type of token an API gateway should use to pass user information to the services. There are two types of tokens to choose from. One option is to use opaque tokens, which are typically UUIDs. The downside of opaque tokens is that they reduce performance and availability and increase latency. That’s because the recipient of such a token must make a synchronous RPC call to a security service to validate the token and retrieve the user information.

An alternative approach, which eliminates the call to the security service, is to use a transparent token containing information about the user. One popular standard for transparent tokens is the JSON Web Token (JWT). JWT is a standard way to securely represent claims, such as user identity and roles, between two parties. A JWT has a payload, which is a JSON object that contains information about the user, such as their identity and roles, and other metadata, such as an expiration date. It’s signed with a secret that’s known only to the creator of the JWT, such as the API gateway, and the recipient of the JWT, such as a service. The secret ensures that a malicious third party can’t forge or tamper with a JWT.
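This structure can be sketched with just the JDK: a token is header.payload.signature, where the signature is an HMAC-SHA256 (the HS256 algorithm) over the first two parts. The following is an illustrative sketch only, with simplified header handling; a production service should use a JWT library rather than hand-rolling token code.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustrative JWT-style signing sketch, not a full JWT implementation.
public class JwtSketch {

    private static final Base64.Encoder URL_ENCODER = Base64.getUrlEncoder().withoutPadding();

    public static String sign(String payloadJson, String secret) {
        String header = URL_ENCODER.encodeToString(
                "{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = URL_ENCODER.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
        String signingInput = header + "." + payload;
        return signingInput + "." + hmacSha256(signingInput, secret);
    }

    // A recipient recomputes the HMAC; a forged or tampered token won't match.
    public static boolean verify(String token, String secret) {
        int lastDot = token.lastIndexOf('.');
        if (lastDot < 0) {
            return false;
        }
        String signingInput = token.substring(0, lastDot);
        return hmacSha256(signingInput, secret).equals(token.substring(lastDot + 1));
    }

    private static String hmacSha256(String data, String secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            return URL_ENCODER.encodeToString(mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String token = sign("{\"sub\":\"consumer-1\",\"roles\":[\"CONSUMER\"]}", "shared-secret");
        System.out.println(token);
        System.out.println(verify(token, "shared-secret")); // true
        System.out.println(verify(token, "wrong-secret"));  // false
    }
}
```

A real JWT verifier would also parse the payload and check the expiration claim, which the next paragraph discusses.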

One issue with JWT is that because a token is self-contained, it’s irrevocable. By design, a service will perform the requested operation after verifying the JWT’s signature and expiration date. As a result, there’s no practical way to revoke an individual JWT that has fallen into the hands of a malicious third party. The solution is to issue JWTs with short expiration times, because that limits what a malicious party could do. One drawback of short-lived JWTs, though, is that the application must somehow continually reissue JWTs to keep the session active. Fortunately, this is one of the many problems that are solved by a security standard called OAuth 2.0. Let’s look at how that works.

Using OAuth 2.0 in a microservice architecture

Let’s say you want to implement a User Service for the FTGO application that manages a user database containing user information, such as credentials and roles. The API gateway calls the User Service to authenticate a client request and obtain a JWT. You could design a User Service API and implement it using your favorite web framework. But that’s generic functionality that isn’t specific to the FTGO application—developing such a service wouldn’t be an efficient use of development resources.

Fortunately, you don’t need to develop this kind of security infrastructure. You can use an off-the-shelf service or framework that implements a standard called OAuth 2.0. OAuth 2.0 is an authorization protocol that was originally designed to enable a user of a public cloud service, such as GitHub or Google, to grant a third-party application access to its information without revealing its password. For example, OAuth 2.0 is the mechanism that enables you to securely grant a third-party, cloud-based continuous integration (CI) service access to your GitHub repository.

Although the original focus of OAuth 2.0 was authorizing access to public cloud services, you can also use it for authentication and authorization in your application. Let’s take a quick look at how a microservice architecture might use OAuth 2.0.

About OAuth 2.0

OAuth 2.0 is a complex topic. In this chapter, I can only provide a brief overview and describe how it can be used in a microservice architecture. For more information on OAuth 2.0, check out the online book OAuth 2.0 Servers by Aaron Parecki (www.oauth.com). Chapter 7 of Spring Microservices in Action (Manning, 2017) also covers this topic (https://livebook.manning.com/#!/book/spring-microservices-in-action/chapter-7/).

The key concepts in OAuth 2.0 are the following:

  • Authorization Server: Provides an API for authenticating users and obtaining an access token and a refresh token. Spring OAuth is a great example of a framework for building an OAuth 2.0 authorization server.
  • Access Token: A token that grants access to a Resource Server. The format of the access token is implementation dependent. But some implementations, such as Spring OAuth, use JWTs.
  • Refresh Token: A long-lived yet revocable token that a Client uses to obtain a new AccessToken.
  • Resource Server: A service that uses an access token to authorize access. In a microservice architecture, the services are resource servers.
  • Client: A client that wants to access a Resource Server. In a microservice architecture, API Gateway is the OAuth 2.0 client.

Later in this section, I describe how to support login-based clients. But first, let’s talk about how to authenticate API clients.

Figure 11.4 shows how the API gateway authenticates a request from an API client. The API gateway authenticates the API client by making a request to the OAuth 2.0 authorization server, which returns an access token. The API gateway then makes one or more requests containing the access token to the services.

Figure 11.4. The API gateway authenticates an API client by making a Password Grant request to the OAuth 2.0 authentication server. The server returns an access token, which the API gateway passes to the services. A service verifies the token’s signature and extracts information about the user, including their identity and roles.

The sequence of events shown in figure 11.4 is as follows:

  1. The client makes a request, supplying its credentials using basic authentication.
  2. The API gateway makes an OAuth 2.0 Password Grant request (www.oauth.com/oauth2-servers/access-tokens/password-grant/) to the OAuth 2.0 authentication server.
  3. The authentication server validates the API client’s credentials and returns an access token and a refresh token.
  4. The API gateway includes the access token in the requests it makes to the services. A service validates the access token and uses it to authorize the request.
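The Password Grant request in this sequence is a form-encoded POST to the authorization server’s token endpoint. The following sketch builds such a request body; the client_id and client_secret, which identify the API gateway itself to the authorization server, are placeholder values for illustration.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustrative sketch: building the form-encoded body of an OAuth 2.0
// Password Grant request. All credential values here are placeholders.
public class PasswordGrantRequest {

    public static String body(String clientId, String clientSecret, String username, String password) {
        return "grant_type=password"
                + "&client_id=" + encode(clientId)
                + "&client_secret=" + encode(clientSecret)
                + "&username=" + encode(username)
                + "&password=" + encode(password);
    }

    private static String encode(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // The gateway would POST this body to the token endpoint with
        // Content-Type: application/x-www-form-urlencoded.
        System.out.println(body("api-gateway", "gw-secret", "alice", "p@ss word"));
    }
}
```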

An OAuth 2.0-based API gateway can authenticate session-oriented clients by using an OAuth 2.0 access token as a session token. What’s more, when the access token expires, it can obtain a new access token using the refresh token. Figure 11.5 shows how an API gateway can use OAuth 2.0 to handle session-oriented clients. An API client initiates a session by POSTing its credentials to the API gateway’s /login endpoint. The API gateway returns an access token and a refresh token to the client. The API client then supplies both tokens when it makes requests to the API gateway.

Figure 11.5. A client logs in by POSTing its credentials to the API gateway. The API gateway authenticates the credentials using the OAuth 2.0 authentication server and returns the access token and refresh token as cookies. The client includes these tokens in the requests it makes to the API gateway.

The sequence of events is as follows:

  1. The login-based client POSTs its credentials to the API gateway.
  2. The API gateway’s Login Handler makes an OAuth 2.0 Password Grant request (www.oauth.com/oauth2-servers/access-tokens/password-grant/) to the OAuth 2.0 authentication server.
  3. The authentication server validates the client’s credentials and returns an access token and a refresh token.
  4. The API gateway returns the access and refresh tokens to the client—as cookies, for example.
  5. The client includes the access and refresh tokens in requests it makes to the API gateway.
  6. The API gateway’s Session Authentication Interceptor validates the access token and includes it in requests it makes to the services.

If the access token has expired or is about to expire, the API gateway obtains a new access token by making an OAuth 2.0 Refresh Grant request (www.oauth.com/oauth2-servers/access-tokens/refreshing-access-tokens/), which contains the refresh token, to the authorization server. If the refresh token hasn’t expired or been revoked, the authorization server returns a new access token. API Gateway passes the new access token to the services and returns it to the client.
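The decision of when to refresh can be sketched as a simple policy: refresh whenever the access token’s expiration time falls within some threshold of the current time. The 30-second threshold below is an assumption made for illustration, not a value from the FTGO application.

```java
import java.time.Instant;

// Illustrative sketch: deciding whether the gateway should exchange the
// refresh token for a new access token before calling the services.
public class TokenRefreshPolicy {

    static final long REFRESH_THRESHOLD_SECONDS = 30; // assumed threshold

    // expiresAt is the access token's expiration time (e.g. the JWT "exp" claim).
    public static boolean shouldRefresh(Instant now, Instant expiresAt) {
        // Refresh if the token is expired, or will expire within the threshold.
        return !now.plusSeconds(REFRESH_THRESHOLD_SECONDS).isBefore(expiresAt);
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        System.out.println(shouldRefresh(now, now.plusSeconds(300))); // false: plenty of time left
        System.out.println(shouldRefresh(now, now.plusSeconds(10)));  // true: about to expire
        System.out.println(shouldRefresh(now, now.minusSeconds(5)));  // true: already expired
    }
}
```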

An important benefit of using OAuth 2.0 is that it’s a proven security standard. Using an off-the-shelf OAuth 2.0 Authentication Server means you don’t have to waste time reinventing the wheel or risk developing an insecure design. But OAuth 2.0 isn’t the only way to implement security in a microservice architecture. Regardless of which approach you use, the three key ideas are as follows:

  • The API gateway is responsible for authenticating clients.
  • The API gateway and the services use a transparent token, such as a JWT, to pass around information about the principal.
  • A service uses the token to obtain the principal’s identity and roles.

Now that we’ve looked at how to make services secure, let’s see how to make them configurable.

11.2. Designing configurable services

Imagine that you’re responsible for Order History Service. As figure 11.6 shows, the service consumes events from Apache Kafka and reads and writes AWS DynamoDB table items. In order for this service to run, it needs various configuration properties, including the network location of Apache Kafka and the credentials and network location for AWS DynamoDB.

Figure 11.6. Order History Service uses Apache Kafka and AWS DynamoDB. It must be configured with the network location, credentials, and so on for each of these services.

The values of these configuration properties depend on which environment the service is running in. For example, the developer and production environments will use different Apache Kafka brokers and different AWS credentials. It doesn’t make sense to hard-wire a particular environment’s configuration property values into the deployable service because that would require it to be rebuilt for each environment. Instead, a service should be built once by the deployment pipeline and deployed into multiple environments.

Nor does it make sense to hard-wire different sets of configuration properties into the source code and use, for example, the Spring Framework’s profile mechanism to select the appropriate set at runtime. That’s because doing so would introduce a security vulnerability and limit where it can be deployed. Additionally, sensitive data such as credentials should be stored securely using a secrets storage mechanism, such as Hashicorp Vault (www.vaultproject.io) or AWS Parameter Store (https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html). Instead, you should supply the appropriate configuration properties to the service at runtime by using the Externalized configuration pattern.

Pattern: Externalized configuration

Supply configuration property values, such as database credentials and network location, to a service at runtime. See http://microservices.io/patterns/externalized-configuration.html.

An externalized configuration mechanism provides the configuration property values to a service instance at runtime. There are two main approaches:

  • Push model: The deployment infrastructure passes the configuration properties to the service instance using, for example, operating system environment variables or a configuration file.
  • Pull model: The service instance reads its configuration properties from a configuration server.

We’ll look at each approach, starting with the push model.

11.2.1. Using push-based externalized configuration

The push model relies on the collaboration of the deployment environment and the service. The deployment environment supplies the configuration properties when it creates a service instance. It might, as figure 11.7 shows, pass the configuration properties as environment variables. Alternatively, the deployment environment may supply the configuration properties using a configuration file. The service instance then reads the configuration properties when it starts up.

Figure 11.7. When the deployment infrastructure creates an instance of Order History Service, it sets the environment variables containing the externalized configuration. Order History Service reads those environment variables.

The deployment environment and the service must agree on how the configuration properties are supplied. The precise mechanism depends on the specific deployment environment. For example, chapter 12 describes how you can specify the environment variables of a Docker container.

Let’s imagine that you’ve decided to supply externalized configuration property values using environment variables. Your application could call System.getenv() to obtain their values. But if you’re a Java developer, it’s likely that you’re using a framework that provides a more convenient mechanism. The FTGO services are built using Spring Boot, which has an extremely flexible externalized configuration mechanism that retrieves configuration properties from a variety of sources with well-defined precedence rules (https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html). Let’s look at how it works.
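Without a framework, the push model boils down to looking a property up in the environment supplied by the deployment infrastructure and falling back to a default. A minimal sketch follows; the property names are examples, not the FTGO application’s actual configuration.

```java
import java.util.Map;

// Illustrative sketch of push-model configuration: the environment map is
// injected so the class can be exercised without real environment variables.
public class EnvironmentConfig {

    private final Map<String, String> env;

    public EnvironmentConfig(Map<String, String> env) {
        this.env = env;
    }

    public String get(String name, String defaultValue) {
        String value = env.get(name);
        return value != null ? value : defaultValue;
    }

    public static void main(String[] args) {
        // In a real service this would be: new EnvironmentConfig(System.getenv())
        EnvironmentConfig config = new EnvironmentConfig(Map.of("AWS_REGION", "us-west-2"));
        System.out.println(config.get("AWS_REGION", "us-east-1"));           // set by the deployment infrastructure
        System.out.println(config.get("KAFKA_BOOTSTRAP", "localhost:9092")); // falls back to the default
    }
}
```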

Spring Boot reads properties from a variety of sources. I find the following sources useful in a microservice architecture:

  1. Command-line arguments
  2. SPRING_APPLICATION_JSON, an operating system environment variable or JVM system property that contains JSON
  3. JVM system properties
  4. Operating system environment variables
  5. A configuration file in the current directory

A particular property value from a source earlier in this list overrides the same property from a source later in this list. For example, operating system environment variables override properties read from a configuration file.
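These precedence rules amount to consulting an ordered list of property sources and taking the first match. The following is an illustrative sketch of that lookup, not Spring Boot’s actual implementation.

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of Spring Boot-style precedence: sources are consulted
// in priority order, and the first source that defines a property wins.
public class PropertyResolver {

    private final List<Map<String, String>> sourcesByPriority;

    public PropertyResolver(List<Map<String, String>> sourcesByPriority) {
        this.sourcesByPriority = sourcesByPriority;
    }

    public String resolve(String name) {
        for (Map<String, String> source : sourcesByPriority) {
            String value = source.get(name);
            if (value != null) {
                return value;
            }
        }
        return null; // undefined in every source
    }

    public static void main(String[] args) {
        Map<String, String> environmentVariables = Map.of("aws.region", "us-west-2");
        Map<String, String> configFile = Map.of("aws.region", "us-east-1", "kafka.servers", "localhost:9092");
        PropertyResolver resolver = new PropertyResolver(List.of(environmentVariables, configFile));
        System.out.println(resolver.resolve("aws.region"));    // the environment variable wins
        System.out.println(resolver.resolve("kafka.servers")); // only the file defines it
    }
}
```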

Spring Boot makes these properties available to the Spring Framework’s ApplicationContext. A service can, for example, obtain the value of a property using the @Value annotation:

public class OrderHistoryDynamoDBConfiguration {

  @Value("${aws.region}")
  private String awsRegion;

The Spring Framework initializes the awsRegion field to the value of the aws.region property. This property is read from one of the sources listed earlier, such as a configuration file or from the AWS_REGION environment variable.

The push model is an effective and widely used mechanism for configuring a service. One limitation, however, is that reconfiguring a running service might be challenging, if not impossible. The deployment infrastructure might not allow you to change the externalized configuration of a running service without restarting it. You can’t, for example, change the environment variables of a running process. Another limitation is that there’s a risk of the configuration property values being scattered throughout the definition of numerous services. As a result, you may want to consider using a pull-based model. Let’s look at how it works.

11.2.2. Using pull-based externalized configuration

In the pull model, a service instance reads its configuration properties from a configuration server. Figure 11.8 shows how it works. On startup, a service instance queries the configuration service for its configuration. The configuration properties for accessing the configuration server, such as its network location, are provided to the service instance via a push-based configuration mechanism, such as environment variables.

Figure 11.8. On startup, a service instance retrieves its configuration properties from the configuration server. The deployment infrastructure supplies the configuration properties for accessing the configuration server.

There are a variety of ways to implement a configuration server, including the following:

  • Version control system such as Git
  • SQL and NoSQL databases
  • Specialized configuration servers, such as Spring Cloud Config Server, Hashicorp Vault, which is a store for sensitive data such as credentials, and AWS Parameter Store

The Spring Cloud Config project is a good example of a configuration server-based framework. It consists of a server and a client. The server supports a variety of backends for storing configuration properties, including version control systems, databases, and Hashicorp Vault. The client retrieves configuration properties from the server and injects them into the Spring ApplicationContext.
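The pull model can be sketched as a client that asks the configuration server for a service’s properties at startup. In this illustrative sketch the transport is abstracted as a function so the code stays self-contained; a real client, such as Spring Cloud Config’s, would issue an HTTP GET to the server and inject the result into the ApplicationContext.

```java
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch of the pull model: on startup, a service instance
// fetches its configuration properties from a configuration server.
public class PullConfigClient {

    private final Function<String, Map<String, String>> fetchFromServer;

    public PullConfigClient(Function<String, Map<String, String>> fetchFromServer) {
        this.fetchFromServer = fetchFromServer;
    }

    public Map<String, String> loadConfiguration(String serviceName) {
        // A real implementation would retry and fail fast if the server is unreachable.
        return fetchFromServer.apply(serviceName);
    }

    public static void main(String[] args) {
        // Stand-in for the configuration server's response for this service.
        PullConfigClient client = new PullConfigClient(
                service -> Map.of("aws.region", "us-west-2", "kafka.servers", "localhost:9092"));
        Map<String, String> config = client.loadConfiguration("order-history-service");
        System.out.println(config.get("aws.region")); // us-west-2
    }
}
```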

Using a configuration server has several benefits:

  • Centralized configuration: All the configuration properties are stored in one place, which makes them easier to manage. What’s more, in order to eliminate duplicate configuration properties, some implementations let you define global defaults, which can be overridden on a per-service basis.
  • Transparent decryption of sensitive data: Encrypting sensitive data such as database credentials is a security best practice. One challenge of using encryption, though, is that usually the service instance needs to decrypt them, which means it needs the encryption keys. Some configuration server implementations automatically decrypt properties before returning them to the service.
  • Dynamic reconfiguration: A service could potentially detect updated property values by, for example, polling, and reconfigure itself.

The primary drawback of using a configuration server is that unless it’s provided by the infrastructure, it’s yet another piece of infrastructure that needs to be set up and maintained. Fortunately, there are various open source frameworks, such as Spring Cloud Config, which make it easier to run a configuration server.

Now that we’ve looked at how to design configurable services, let’s talk about how to design observable services.

11.3. Designing observable services

Let’s say you’ve deployed the FTGO application into production. You probably want to know what the application is doing: requests per second, resource utilization, and so on. You also need to be alerted if there’s a problem, such as a failed service instance or a disk filling up—ideally before it impacts a user. And, if there’s a problem, you need to be able to troubleshoot and identify the root cause.

Many aspects of managing an application in production are outside the scope of the developer, such as monitoring hardware availability and utilization. These are clearly the responsibility of operations. But there are several patterns that you, as a service developer, must implement to make your service easier to manage and troubleshoot. These patterns, shown in figure 11.9, expose a service instance’s behavior and health. They enable a monitoring system to track and visualize the state of a service and generate alerts when there’s a problem. These patterns also make troubleshooting problems easier.

Figure 11.9. The observability patterns enable developers and operations to understand an application's behavior and troubleshoot problems. Developers are responsible for ensuring that their services are observable. Operations is responsible for the infrastructure that collects the information exposed by the services.

You can use the following patterns to design observable services:

  • Health check API: Expose an endpoint that returns the health of the service.
  • Log aggregation: Log service activity and write logs into a centralized logging server, which provides searching and alerting.
  • Distributed tracing: Assign each external request a unique ID and trace requests as they flow between services.
  • Exception tracking: Report exceptions to an exception tracking service, which de-duplicates exceptions, alerts developers, and tracks the resolution of each exception.
  • Application metrics: Services maintain metrics, such as counters and gauges, and expose them to a metrics server.
  • Audit logging: Log user actions.

A distinctive feature of most of these patterns is that each pattern has a developer component and an operations component. Consider, for example, the Health check API pattern. The developer is responsible for ensuring that their service implements a health check endpoint. Operations is responsible for the monitoring system that periodically invokes the health check API. Similarly, for the Log aggregation pattern, a developer is responsible for ensuring that their services log useful information, whereas operations is responsible for log aggregation.

Let’s take a look at each of these patterns, starting with the Health check API pattern.

11.3.1. Using the Health check API pattern

Sometimes a service may be running but unable to handle requests. For instance, a newly started service instance may not be ready to accept requests. The FTGO Consumer Service, for example, takes around 10 seconds to initialize the messaging and database adapters. It would be pointless for the deployment infrastructure to route HTTP requests to a service instance until it’s ready to process them.

Also, a service instance can fail without terminating. For example, a bug might cause an instance of Consumer Service to run out of database connections and be unable to access the database. The deployment infrastructure shouldn’t route requests to a service instance that has failed yet is still running. And, if the service instance does not recover, the deployment infrastructure must terminate it and create a new instance.

Pattern: Health check API

A service exposes a health check API endpoint, such as GET /health, which returns the health of the service. See http://microservices.io/patterns/observability/healthcheck-api.html.

A service instance needs to be able to tell the deployment infrastructure whether or not it’s able to handle requests. A good solution is for a service to implement a health check endpoint, which is shown in figure 11.10. The Spring Boot Actuator Java library, for example, implements a GET /actuator/health endpoint, which returns 200 if and only if the service is healthy, and 503 otherwise. Similarly, the HealthChecks .NET library implements a GET /hc endpoint (https://docs.microsoft.com/en-us/dotnet/standard/microservices-architecture/implement-resilient-applications/monitor-app-health). The deployment infrastructure periodically invokes this endpoint to determine the health of the service instance and takes the appropriate action if it’s unhealthy.

Figure 11.10. A service implements a health check endpoint, which the deployment infrastructure periodically invokes to determine the health of the service instance.

A Health Check Request Handler typically tests the service instance’s connections to external services. It might, for example, execute a test query against a database. If all the tests succeed, Health Check Request Handler returns a healthy response, such as an HTTP 200 status code. If any of them fails, it returns an unhealthy response, such as an HTTP 500 status code.

Health Check Request Handler might simply return an empty HTTP response with the appropriate status code. Or it might return a detailed description of the health of each of the adapters. The detailed information is useful for troubleshooting. But because it may contain sensitive information, some frameworks, such as Spring Boot Actuator, let you configure the level of detail in the health endpoint response.

There are two issues you need to consider when using health checks. The first is the implementation of the endpoint, which must report back on the health of the service instance. The second issue is how to configure the deployment infrastructure to invoke the health check endpoint. Let’s first look at how to implement the endpoint.

Implementing the health check endpoint

The code that implements the health check endpoint must somehow determine the health of the service instance. One simple approach is to verify that the service instance can access its external infrastructure services. How to do this depends on the infrastructure service. The health check code can, for example, verify that it’s connected to an RDBMS by obtaining a database connection and executing a test query. A more elaborate approach is to execute a synthetic transaction that simulates the invocation of the service’s API by a client. This kind of health check is more thorough, but it’s likely to be more time consuming to implement and take longer to execute.
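
The core of such an endpoint can be sketched as framework-free Java: run a set of named dependency checks (a database test query, a broker ping, and so on) and map the overall result to the HTTP status code the endpoint should return. This is a simplified illustration of the idea; real libraries such as Spring Boot Actuator work along these lines but offer much richer reporting:

```java
import java.util.Map;
import java.util.function.Supplier;

// Sketch of a health check handler: each check returns true if the
// dependency is reachable; a check that throws counts as a failure.
public class HealthCheckHandler {

  private final Map<String, Supplier<Boolean>> checks;

  public HealthCheckHandler(Map<String, Supplier<Boolean>> checks) {
    this.checks = checks;
  }

  // 200 if every dependency check passes, 503 otherwise
  public int statusCode() {
    boolean healthy = checks.values().stream().allMatch(check -> {
      try {
        return check.get();
      } catch (Exception e) {
        return false;
      }
    });
    return healthy ? 200 : 503;
  }
}
```

A check entry such as `"database"` would, in practice, obtain a connection and execute a test query.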

A great example of a health check library is Spring Boot Actuator. As mentioned earlier, it implements a /actuator/health endpoint. The code that implements this endpoint returns the result of executing a set of health checks. By using convention over configuration, Spring Boot Actuator implements a sensible set of health checks based on the infrastructure services used by the service. If, for example, a service uses a JDBC DataSource, Spring Boot Actuator configures a health check that executes a test query. Similarly, if the service uses the RabbitMQ message broker, it automatically configures a health check that verifies that the RabbitMQ server is up.

You can also customize this behavior by implementing additional health checks for your service. You implement a custom health check by defining a class that implements the HealthIndicator interface. This interface defines a health() method, which is called by the implementation of the /actuator/health endpoint. It returns the outcome of the health check.

Invoking the health check endpoint

A health check endpoint isn’t much use if nobody calls it. When you deploy your service, you must configure the deployment infrastructure to invoke the endpoint. How you do that depends on the specific details of your deployment infrastructure. For example, as described in chapter 3, you can configure some service registries, such as Netflix Eureka, to invoke the health check endpoint in order to determine whether traffic should be routed to the service instance. Chapter 12 discusses how to configure Docker and Kubernetes to invoke a health check endpoint.

11.3.2. Applying the Log aggregation pattern

Logs are a valuable troubleshooting tool. If you want to know what’s wrong with your application, a good place to start is the log files. But using logs in a microservice architecture is challenging. For example, imagine you’re debugging a problem with the getOrderDetails() query. As described in chapter 8, the FTGO application implements this query using API composition. As a result, the log entries you need are scattered across the log files of the API gateway and several services, including Order Service and Kitchen Service.

Pattern: Log aggregation

Aggregate the logs of all services in a centralized database that supports searching and alerting. See http://microservices.io/patterns/observability/application-logging.html.

The solution is to use log aggregation. As figure 11.11 shows, the log aggregation pipeline sends the logs of all of the service instances to a centralized logging server. Once the logs are stored by the logging server, you can view, search, and analyze them. You can also configure alerts that are triggered when certain messages appear in the logs.

Figure 11.11. The log aggregation infrastructure ships the logs of each service instance to a centralized logging server. Users can view and search the logs. They can also set up alerts, which are triggered when log entries match search criteria.

The logging pipeline and server are usually the responsibility of operations. But service developers are responsible for writing services that generate useful logs. Let’s first look at how a service generates a log.

How a service generates logs

As a service developer, there are a couple of issues you need to consider. First you need to decide which logging library to use. The second issue is where to write the log entries. Let’s first look at the logging library.

Most programming languages have one or more logging libraries that make it easy to generate correctly structured log entries. For example, three popular Java logging libraries are Logback, log4j, and JUL (java.util.logging). There’s also SLF4J, which is a logging facade API for the various logging frameworks. Similarly, Log4JS is a popular logging framework for NodeJS. One reasonable way to use logging is to sprinkle calls to one of these logging libraries in your service’s code. But if you have strict logging requirements that can’t be enforced by the logging library, you may need to define your own logging API that wraps a logging library.

You also need to decide where to log. Traditionally, you would configure the logging framework to write to a log file in a well-known location in the filesystem. But with the more modern deployment technologies, such as containers and serverless, described in chapter 12, this is often not the best approach. In some environments, such as AWS Lambda, there isn’t even a “permanent” filesystem to write the logs to! Instead, your service should log to stdout. The deployment infrastructure will then decide what to do with the output of your service.
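
As a minimal sketch of logging to stdout, here's how JUL (one of the libraries mentioned above) can be pointed at `System.out` instead of a file; a service using SLF4J with Logback would configure a console appender instead:

```java
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;
import java.util.logging.StreamHandler;

// Sketch: route a service's JUL log entries to stdout so the deployment
// infrastructure can capture and ship the output.
public class StdoutLogging {

  public static Logger create(String serviceName) {
    Logger logger = Logger.getLogger(serviceName);
    logger.setUseParentHandlers(false);  // disable the default handlers
    // StreamHandler writes to the given stream; entries appear on stdout
    // once the handler flushes.
    StreamHandler handler = new StreamHandler(System.out, new SimpleFormatter());
    handler.setLevel(Level.ALL);
    logger.addHandler(handler);
    logger.setLevel(Level.ALL);
    return logger;
  }
}
```

The service name used here is arbitrary; the point is that no filesystem path is involved.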

The log aggregation infrastructure

The logging infrastructure is responsible for aggregating the logs, storing them, and enabling the user to search them. One popular logging infrastructure is the ELK stack. ELK consists of three open source products:

  • Elasticsearch: A text search-oriented NoSQL database that's used as the logging server
  • Logstash: A log pipeline that aggregates the service logs and writes them to Elasticsearch
  • Kibana: A visualization tool for Elasticsearch

Other open source log pipelines include Fluentd and Apache Flume. Examples of logging servers include cloud services, such as AWS CloudWatch Logs, as well as numerous commercial offerings. Log aggregation is a useful debugging tool in a microservice architecture.

Let’s now look at distributed tracing, which is another way of understanding the behavior of a microservices-based application.

11.3.3. Using the Distributed tracing pattern

Imagine you’re a FTGO developer who is investigating why the getOrderDetails() query has slowed down. You’ve ruled out the problem being an external networking issue. The increased latency must be caused by either the API gateway or one of the services it has invoked. One option is to look at each service’s average response time. The trouble with this option is that it’s an average across requests rather than the timing breakdown for an individual request. Plus more complex scenarios might involve many nested service invocations. You may not even be familiar with all services. As a result, it can be challenging to troubleshoot and diagnose these kinds of performance problems in a microservice architecture.

Pattern: Distributed tracing

Assign each external request a unique ID and record how it flows through the system from one service to the next in a centralized server that provides visualization and analysis. See http://microservices.io/patterns/observability/distributed-tracing.html.

A good way to get insight into what your application is doing is to use distributed tracing. Distributed tracing is analogous to a performance profiler in a monolithic application. It records information (for example, start time and end time) about the tree of service calls that are made when handling a request. You can then see how the services interact during the handling of external requests, including a breakdown of where the time is spent.

Figure 11.12 shows an example of how a distributed tracing server displays what happens when the API gateway handles a request. It shows the inbound request to the API gateway and the request that the gateway makes to Order Service. For each request, the distributed tracing server shows the operation that’s performed and the timing of the request.

Figure 11.12. The Zipkin server shows how the FTGO application handles a request that the API gateway routes to Order Service. Each request is represented by a trace. A trace is a set of spans. Each span, which can contain child spans, is the invocation of a service. Depending on the level of detail collected, a span can also represent the invocation of an operation within a service.

Figure 11.12 shows what in distributed tracing terminology is called a trace. A trace represents an external request and consists of one or more spans. A span represents an operation, and its key attributes are an operation name, start timestamp, and end time. A span can have one or more child spans, which represent nested operations. For example, a top-level span might represent the invocation of the API gateway, as is the case in figure 11.12. Its child spans represent the invocations of services by the API gateway.
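
The trace/span model just described can be sketched as a small data structure. This `Span` class and its `selfTimeMillis()` helper are illustrative only, not part of any tracing library:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a span: an operation name, start/end timestamps, and child
// spans representing nested operations.
public class Span {
  final String operation;
  final long startMillis, endMillis;
  final List<Span> children = new ArrayList<>();

  Span(String operation, long startMillis, long endMillis) {
    this.operation = operation;
    this.startMillis = startMillis;
    this.endMillis = endMillis;
  }

  long durationMillis() {
    return endMillis - startMillis;
  }

  // Time spent in this operation itself, excluding nested calls; this is
  // the "breakdown of where the time is spent" a tracing UI displays.
  long selfTimeMillis() {
    long childTime = children.stream().mapToLong(Span::durationMillis).sum();
    return durationMillis() - childTime;
  }
}
```

A trace is then simply the top-level span, such as the API gateway's, together with its tree of children.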

A valuable side effect of distributed tracing is that it assigns a unique ID to each external request. A service can include the request ID in its log entries. When combined with log aggregation, the request ID enables you to easily find all log entries for a particular external request. For example, here’s an example log entry from Order Service:

2018-03-04 17:38:12.032 DEBUG [ftgo-order-
     service,8d8fdc37be104cc6,8d8fdc37be104cc6,false]
  7 --- [nio-8080-exec-6] org.hibernate.SQL                        :
  select order0_.id as id1_3_0_, order0_.consumer_id as consumer2_3_0_, order
     0_.city as city3_3_0_,
  order0_.delivery_state as delivery4_3_0_, order0_.street1 as street5_3_0_,
  order0_.street2 as street6_3_0_, order0_.zip as zip7_3_0_,
order0_.delivery_time as delivery8_3_0_, order0_.a

The [ftgo-order-service,8d8fdc37be104cc6,8d8fdc37be104cc6,false] part of the log entry (the SLF4J Mapped Diagnostic Context—see www.slf4j.org/manual.html) contains information from the distributed tracing infrastructure. It consists of four values:

  • ftgo-order-service: The name of the application
  • 8d8fdc37be104cc6: The traceId
  • 8d8fdc37be104cc6: The spanId
  • false: Indicates that this span wasn't exported to the distributed tracing server

If you search the logs for 8d8fdc37be104cc6, you’ll find all log entries for that request.

Figure 11.13 shows how distributed tracing works. There are two parts to distributed tracing: an instrumentation library, which is used by each service, and a distributed tracing server. The instrumentation library manages the traces and spans. It also adds tracing information, such as the current trace ID and the parent span ID, to outbound requests. For example, one common standard for propagating trace information is the B3 standard (https://github.com/openzipkin/b3-propagation), which uses headers such as X-B3-TraceId and X-B3-ParentSpanId. The instrumentation library also reports traces to the distributed tracing server. The distributed tracing server stores the traces and provides a UI for visualizing them.
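
The B3 propagation just described can be sketched in plain Java: an outbound request carries the current trace ID unchanged, a freshly generated span ID, and the calling span's ID as the parent. This is a simplified illustration; an instrumentation library such as Spring Cloud Sleuth generates the IDs and sets the headers for you, and the `newId()` helper here is a hypothetical stand-in for a proper ID generator:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of B3-style trace context propagation on an outbound request.
public class B3Propagation {

  // B3 span IDs are 64-bit, hex-encoded; derived from a UUID here for
  // simplicity.
  static String newId() {
    return UUID.randomUUID().toString().replace("-", "").substring(0, 16);
  }

  public static Map<String, String> outboundHeaders(String traceId, String currentSpanId) {
    Map<String, String> headers = new HashMap<>();
    headers.put("X-B3-TraceId", traceId);            // same trace ID end to end
    headers.put("X-B3-SpanId", newId());             // new span for the outbound call
    headers.put("X-B3-ParentSpanId", currentSpanId); // links the new span to its parent
    return headers;
  }
}
```

Because the trace ID is passed through unchanged, every service that handles the request can include the same ID in its log entries.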

Figure 11.13. Each service, including the API gateway, uses an instrumentation library. The instrumentation library creates spans for each external request, propagates trace state between services, and reports spans to the distributed tracing server.

Let’s take a look at the instrumentation library and the distributed tracing server, beginning with the library.

Using an instrumentation library

The instrumentation library builds the tree of spans and sends them to the distributed tracing server. The service code could call the instrumentation library directly, but that would intertwine the instrumentation logic with business and other logic. A cleaner approach is to use interceptors or aspect-oriented programming (AOP).

A great example of an AOP-based framework is Spring Cloud Sleuth. It uses the Spring Framework’s AOP mechanism to automagically integrate distributed tracing into the service. All you have to do is add Spring Cloud Sleuth as a project dependency. Your service doesn’t need to call a distributed tracing API except in those cases that aren’t handled by Spring Cloud Sleuth.

About the distributed tracing server

The instrumentation library sends the spans to a distributed tracing server. The distributed tracing server stitches the spans together to form complete traces and stores them in a database. One popular distributed tracing server is Open Zipkin. Zipkin was originally developed by Twitter. Services can deliver spans to Zipkin using either HTTP or a message broker. Zipkin stores the traces in a storage backend, which is either a SQL or NoSQL database. It has a UI that displays traces, as shown earlier in figure 11.12. AWS X-ray is another example of a distributed tracing server.

11.3.4. Applying the Application metrics pattern

A key part of the production environment is monitoring and alerting. As figure 11.14 shows, the monitoring system gathers metrics, which provide critical information about the health of an application, from every part of the technology stack. Metrics range from infrastructure-level metrics, such as CPU, memory, and disk utilization, to application-level metrics, such as service request latency and number of requests executed. Order Service, for example, gathers metrics about the number of placed, approved, and rejected orders. The metrics are collected by a metrics service, which provides visualization and alerting.

Pattern: Application metrics

Services report metrics to a central server that provides aggregation, visualization, and alerting.

Figure 11.14. Metrics from every level of the stack are collected and stored in a metrics service, which provides visualization and alerting.

Metrics are sampled periodically. A metric sample has the following three properties:

  • Name: The name of the metric, such as jvm_memory_max_bytes or placed_orders
  • Value: A numeric value
  • Timestamp: The time of the sample

In addition, some monitoring systems support the concept of dimensions, which are arbitrary name-value pairs. For example, jvm_memory_max_bytes is reported with dimensions such as area="heap",id="PS Eden Space" and area="heap",id="PS Old Gen". Dimensions are often used to provide additional information, such as the machine name or service name, or a service instance identifier. A monitoring system typically aggregates (sums or averages) metric samples along one or more dimensions.
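
The aggregation along a dimension can be sketched as follows. The `Sample` type here is a simplified stand-in, not the data model of any real monitoring system:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: sum metric samples grouped by the value of one dimension, as a
// monitoring system might do when aggregating, say, by service.
public class MetricAggregator {

  public record Sample(String name, double value, Map<String, String> dimensions) {}

  public static Map<String, Double> sumBy(List<Sample> samples, String dimension) {
    return samples.stream().collect(Collectors.groupingBy(
        s -> s.dimensions().getOrDefault(dimension, "unknown"),
        Collectors.summingDouble(Sample::value)));
  }
}
```

Averaging works the same way with `Collectors.averagingDouble` as the downstream collector.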

Many aspects of monitoring are the responsibility of operations. But a service developer is responsible for two aspects of metrics. First, they must instrument their service so that it collects metrics about its behavior. Second, they must expose those service metrics, along with metrics from the JVM and the application framework, to the metrics server.

Let’s first look at how a service collects metrics.

Collecting service-level metrics

How much work you need to do to collect metrics depends on the frameworks that your application uses and the metrics you want to collect. A Spring Boot-based service can, for example, gather (and expose) basic metrics, such as JVM metrics, by including the Micrometer Metrics library as a dependency and using a few lines of configuration. Spring Boot’s autoconfiguration takes care of configuring the metrics library and exposing the metrics. A service only needs to use the Micrometer Metrics API directly if it gathers application-specific metrics.

The following listing shows how OrderService can collect metrics about the number of orders placed, approved, and rejected. It uses MeterRegistry, the meter-management interface provided by Micrometer Metrics, to gather custom metrics. Each method increments an appropriately named counter.

Listing 11.1. OrderService tracks the number of placed, approved, and rejected orders.
public class OrderService {

  @Autowired
  private MeterRegistry meterRegistry;                          1

  public Order createOrder(...) {
    ...
    meterRegistry.counter("placed_orders").increment();         2
    return order;
  }

  public void approveOrder(long orderId) {
    ...
    meterRegistry.counter("approved_orders").increment();       3
  }

  public void rejectOrder(long orderId) {
    ...
    meterRegistry.counter("rejected_orders").increment();       4
  }
}

  • 1 The Micrometer Metrics library API for managing application-specific meters
  • 2 Increments the placed_orders counter when an order has successfully been placed
  • 3 Increments the approved_orders counter when an order has been approved
  • 4 Increments the rejected_orders counter when an order has been rejected
Delivering metrics to the metrics service

A service delivers metrics to the Metrics Service in one of two ways: push or pull. With the push model, a service instance sends the metrics to the Metrics Service by invoking an API. AWS Cloudwatch metrics, for example, implements the push model.

With the pull model, the Metrics Service (or its agent running locally) invokes a service API to retrieve the metrics from the service instance. Prometheus, a popular open source monitoring and alerting system, uses the pull model.

The FTGO application’s Order Service uses the micrometer-registry-prometheus library to integrate with Prometheus. Because this library is on the classpath, Spring Boot exposes a GET /actuator/prometheus endpoint, which returns metrics in the format that Prometheus expects. The custom metrics from OrderService are reported as follows:

$ curl -v http://localhost:8080/actuator/prometheus | grep _orders
# HELP placed_orders_total
# TYPE placed_orders_total counter
placed_orders_total{service="ftgo-order-service",} 1.0
# HELP approved_orders_total
# TYPE approved_orders_total counter
approved_orders_total{service="ftgo-order-service",} 1.0

例如,placed_orders 计数器报告为 counter 类型的度量。

The placed_orders counter is, for example, reported as a metric of type counter.

Prometheus 服务器会定期轮询此端点以检索指标。指标进入 Prometheus 后,您可以使用数据可视化工具 Grafana (https://grafana.com) 查看它们。您还可以为这些指标设置警报,例如当 placed_orders_total 的变化率低于某个阈值时。

The Prometheus server periodically polls this endpoint to retrieve metrics. Once the metrics are in Prometheus, you can view them using Grafana, a data visualization tool (https://grafana.com). You can also set up alerts for these metrics, such as when the rate of change for placed_orders_total falls below some threshold.
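In the FTGO code, Micrometer's meter registry does the counter bookkeeping and Prometheus exposition automatically. To make the mechanism concrete, here is a minimal, dependency-free sketch of the same idea in plain Java. `CounterRegistry` is a hypothetical stand-in for illustration, not Micrometer's actual API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch of a counter registry that renders the Prometheus text exposition
// format, similar to what micrometer-registry-prometheus produces behind
// the GET /actuator/prometheus endpoint.
public class CounterRegistry {
    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();
    private final String serviceTag;

    public CounterRegistry(String serviceTag) {
        this.serviceTag = serviceTag;
    }

    // Called by business logic, e.g. when an order is placed.
    public void increment(String name) {
        counters.computeIfAbsent(name, k -> new LongAdder()).increment();
    }

    // Render each counter as HELP/TYPE/value lines, as a Prometheus
    // server expects when it scrapes the endpoint.
    public String scrape() {
        StringBuilder sb = new StringBuilder();
        counters.forEach((name, value) -> {
            sb.append("# HELP ").append(name).append("_total\n");
            sb.append("# TYPE ").append(name).append("_total counter\n");
            sb.append(name).append("_total{service=\"").append(serviceTag)
              .append("\",} ").append(value.doubleValue()).append('\n');
        });
        return sb.toString();
    }

    public static void main(String[] args) {
        CounterRegistry registry = new CounterRegistry("ftgo-order-service");
        registry.increment("placed_orders");
        registry.increment("approved_orders");
        System.out.print(registry.scrape());
    }
}
```

Running `main` prints output in the same `# HELP` / `# TYPE` / value shape shown in the curl example above.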

应用程序指标提供了有关应用程序行为的宝贵见解。通过指标触发的警报,您可以快速响应生产问题,也许在它影响用户之前。现在让我们看看如何观察和响应另一个警报源:异常。

Application metrics provide valuable insights into your application’s behavior. Alerts triggered by metrics enable you to quickly respond to a production issue, perhaps before it impacts users. Let’s now look at how to observe and respond to another source of alerts: exceptions.

11.3.5. 使用 Exception 跟踪模式

11.3.5. Using the Exception tracking pattern

服务应该很少记录异常,而当它记录异常时,确定根本原因非常重要。异常可能是故障或编程错误的症状。查看异常的传统方法是查看日志。您甚至可以将日志服务器配置为在日志文件中出现异常时提醒您。然而,这种方法存在几个问题:

A service should rarely log an exception, and when it does, it’s important that you identify the root cause. The exception might be a symptom of a failure or a programming bug. The traditional way to view exceptions is to look in the logs. You might even configure the logging server to alert you if an exception appears in the log file. There are, however, several problems with this approach:

  • 日志文件以单行日志条目为导向,而异常由多行组成。
  • Log files are oriented around single-line log entries, whereas exceptions consist of multiple lines.
  • 没有机制可以跟踪日志文件中出现的异常的解决情况。您必须手动将异常复制/粘贴到问题跟踪器中。
  • There’s no mechanism to track the resolution of exceptions that occur in log files. You would have to manually copy/paste the exception into an issue tracker.
  • 可能存在重复的异常,但没有自动机制将它们视为一个异常。
  • There are likely to be duplicate exceptions, but there’s no automatic mechanism to treat them as one.
模式:异常跟踪

服务将异常报告给中心服务,该服务对异常进行去重、生成警报并管理异常的解决。请参阅 http://microservices.io/patterns/observability/exception-tracking.html。

Services report exceptions to a central service that de-duplicates exceptions, generates alerts, and manages the resolution of exceptions. See http://microservices.io/patterns/observability/exception-tracking.html.

更好的方法是使用异常跟踪服务。如图 11.15 所示,您可以将服务配置为通过 REST API 等方式向异常跟踪服务报告异常。异常跟踪服务会对异常进行去重、生成警报并管理异常的解决。

A better approach is to use an exception tracking service. As figure 11.15 shows, you configure your service to report exceptions to an exception tracking service via, for example, a REST API. The exception tracking service de-duplicates exceptions, generates alerts, and manages the resolution of exceptions.

图 11.15. 服务向异常跟踪服务报告异常,异常跟踪服务会对异常进行去重并提醒开发人员。它提供用于查看和管理异常的 UI。

异常跟踪服务

有几种异常跟踪服务。有些服务(例如 Honeybadger,www.honeybadger.io)完全基于云。其他服务(例如 Sentry.io,https://sentry.io/welcome/)还提供可以部署在您自己基础设施上的开源版本。这些服务从您的应用程序接收异常并生成警报。它们提供了一个控制台,用于查看异常并管理其解决。异常跟踪服务通常提供多种语言的客户端库。

There are several exception tracking services. Some, such as Honeybadger (www.honeybadger.io), are purely cloud-based. Others, such as Sentry.io (https://sentry.io/welcome/), also have an open source version that you can deploy on your own infrastructure. These services receive exceptions from your application and generate alerts. They provide a console for viewing exceptions and managing their resolution. An exception tracking service typically provides client libraries in a variety of languages.

有几种方法可以将异常跟踪服务集成到您的应用程序中。您的服务可以直接调用异常跟踪服务的 API。更好的方法是使用异常跟踪服务提供的客户端库。例如,HoneyBadger 的客户端库提供了几种易于使用的集成机制,包括捕获并报告异常的 Servlet 过滤器。

There are a couple of ways to integrate the exception tracking service into your application. Your service could invoke the exception tracking service’s API directly. A better approach is to use a client library provided by the exception tracking service. For example, HoneyBadger’s client library provides several easy-to-use integration mechanisms, including a Servlet Filter that catches and reports exceptions.
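To illustrate what such a client library does under the hood, here is a plain-Java sketch of the core behaviors the text describes: catching an exception, fingerprinting it, de-duplicating it, and reporting it. `ExceptionTracker` and its fingerprinting scheme are hypothetical stand-ins, not Honeybadger's or Sentry's actual API.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of an exception tracking client: fingerprint, de-duplicate, report.
// A real client library (e.g. one providing a Servlet Filter) does this
// transparently and POSTs to the tracking service's REST API.
public class ExceptionTracker {
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    // Fingerprint on exception type + top stack frame so that repeated
    // occurrences of the same failure collapse into one tracked issue.
    private String fingerprint(Throwable t) {
        StackTraceElement[] trace = t.getStackTrace();
        return t.getClass().getName() + "@"
                + (trace.length > 0 ? trace[0] : "unknown");
    }

    // Returns true if this is a new (not yet reported) exception.
    public boolean report(Throwable t) {
        if (!seen.add(fingerprint(t))) {
            return false; // duplicate: already tracked
        }
        // A real client would send this to the exception tracking service.
        System.err.println("Reporting new exception: " + fingerprint(t));
        return true;
    }

    public static void main(String[] args) {
        ExceptionTracker tracker = new ExceptionTracker();
        Throwable failure = new IllegalStateException("payment gateway timeout");
        tracker.report(failure); // reported
        tracker.report(failure); // suppressed as a duplicate
    }
}
```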

Exception tracking pattern (异常跟踪模式) 是快速识别和响应生产问题的有用方法。

The Exception tracking pattern is a useful way to quickly identify and respond to production issues.

跟踪用户行为也很重要。让我们看看如何做到这一点。

It’s also important to track user behavior. Let’s look at how to do that.

11.3.6. 应用 Audit 日志记录模式

11.3.6. Applying the Audit logging pattern

审计日志记录的目的是记录每个用户的操作。审计日志通常用于帮助客户支持、确保合规性并检测可疑行为。每个审计日志条目都记录了用户的身份、他们执行的操作和所涉及的业务对象。应用程序通常将审计日志存储在数据库表中。

The purpose of audit logging is to record each user’s actions. An audit log is typically used to help customer support, ensure compliance, and detect suspicious behavior. Each audit log entry records the identity of the user, the action they performed, and the business object(s). An application usually stores the audit log in a database table.

模式:审核日志记录

在数据库中记录用户操作,以帮助客户支持、确保合规性并检测可疑行为。请参阅 http://microservices.io/patterns/observability/audit-logging.html

Record user actions in a database in order to help customer support, ensure compliance, and detect suspicious behavior. See http://microservices.io/patterns/observability/audit-logging.html.

有几种不同的方法可以实现审计日志记录:

There are a few different ways to implement audit logging:

  • 将审计日志记录代码添加到业务逻辑中。
  • Add audit logging code to the business logic.
  • 使用面向方面的编程 (AOP)。
  • Use aspect-oriented programming (AOP).
  • 使用事件溯源。
  • Use event sourcing.

让我们看看每个选项。

Let’s look at each option.

将审计日志记录代码添加到业务逻辑中

第一个也是最直接的选项是将审计日志记录代码散布在服务的业务逻辑中。例如,每个服务方法都可以创建审计日志条目并将其保存在数据库中。这种方法的缺点是它将审计日志记录代码和业务逻辑交织在一起,从而降低了可维护性。另一个缺点是它可能容易出错,因为它依赖于开发人员编写审计日志记录代码。

The first and most straightforward option is to sprinkle audit logging code throughout your service’s business logic. Each service method, for example, can create an audit log entry and save it in the database. The drawback with this approach is that it intertwines auditing logging code and business logic, which reduces maintainability. The other drawback is that it’s potentially error prone, because it relies on the developer writing audit logging code.
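A sketch of this first option, assuming a simplified in-memory audit log (a real service would insert the entry into a database table). `AuditEntry` and `cancelOrder` are illustrative names, not the FTGO codebase's actual classes.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Sketch of option 1: audit logging interleaved with business logic.
public class OrderService {
    // Who did what, to which business object, and when.
    public record AuditEntry(String userId, String action,
                             String objectId, Instant at) {}

    private final List<AuditEntry> auditLog = new ArrayList<>();

    public void cancelOrder(String userId, String orderId) {
        // ... business logic to cancel the order would go here ...

        // The drawback: every service method must remember this line,
        // and the audit concern is tangled with the business logic.
        auditLog.add(new AuditEntry(userId, "CANCEL_ORDER",
                orderId, Instant.now()));
    }

    public List<AuditEntry> auditLog() {
        return auditLog;
    }
}
```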

使用面向方面的编程

第二种选择是使用 AOP。您可以使用 AOP 框架(例如 Spring AOP)来定义自动拦截每个服务方法调用并持久化审计日志条目的通知。这是一种更可靠的方法,因为它会自动记录每个服务方法调用。使用 AOP 的主要缺点是通知只能访问方法名称及其参数,因此可能难以确定正在操作的业务对象并生成面向业务的审计日志条目。

The second option is to use AOP. You can use an AOP framework, such as Spring AOP, to define advice that automatically intercepts each service method call and persists an audit log entry. This is a much more reliable approach, because it automatically records every service method invocation. The main drawback of using AOP is that the advice only has access to the method name and its arguments, so it might be challenging to determine the business object being acted upon and generate a business-oriented audit log entry.
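The interception idea can be sketched with a JDK dynamic proxy standing in for a full AOP framework such as Spring AOP: the invocation handler plays the role of the advice. Note how the handler sees only the method name and arguments, which is exactly the limitation described above. The `OrderService` interface here is an illustrative stand-in.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of option 2: every call through the proxy is intercepted and an
// audit entry is recorded automatically, without touching business logic.
public class AuditProxy {
    public interface OrderService {
        void cancelOrder(String orderId);
    }

    public static OrderService audited(OrderService target, List<String> log) {
        InvocationHandler handler = (proxy, method, args) -> {
            // The "advice": all we can see is the method name and arguments.
            log.add(method.getName() + Arrays.toString(args));
            return method.invoke(target, args);
        };
        return (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(),
                new Class<?>[]{OrderService.class}, handler);
    }

    public static void main(String[] args) {
        List<String> auditLog = new ArrayList<>();
        OrderService service = audited(orderId -> { /* business logic */ }, auditLog);
        service.cancelOrder("order-1");
        System.out.println(auditLog); // prints [cancelOrder[order-1]]
    }
}
```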

使用事件溯源

第三个也是最后一个选项是使用事件溯源实现您的业务逻辑。如第 6 章所述,事件溯源会自动为创建和更新操作提供审计日志。您需要在每个事件中记录用户的身份。但是,使用事件溯源的一个限制是它不记录查询。如果您的服务必须为查询创建日志条目,那么您还必须同时使用其他选项之一。

The third and final option is to implement your business logic using event sourcing. As mentioned in chapter 6, event sourcing automatically provides an audit log for create and update operations. You need to record the identity of the user in each event. One limitation with using event sourcing, though, is that it doesn’t record queries. If your service must create log entries for queries, then you’ll have to use one of the other options as well.
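A sketch of the third option, assuming a simplified in-memory event store: because every state change is persisted as an event anyway, recording the user's identity in each event produces the audit trail as a side effect. `Event` and `cancelOrder` are illustrative names.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Sketch of option 3: with event sourcing, each create/update is already
// persisted as an event, so including the user's identity in the event
// yields an audit log for free (for state-changing operations only).
public class EventSourcedAudit {
    public record Event(String type, String aggregateId,
                        String userId, Instant at) {}

    private final List<Event> eventStore = new ArrayList<>();

    public void cancelOrder(String userId, String orderId) {
        // Applying the command appends an event; no separate audit code.
        eventStore.add(new Event("OrderCancelled", orderId,
                userId, Instant.now()));
    }

    // The event log doubles as the audit log, but queries leave no trace.
    public List<Event> events() {
        return eventStore;
    }
}
```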

11.4. 使用微服务 chassis 模式开发服务

11.4. Developing services using the Microservice chassis pattern

本章介绍了服务必须实现的许多问题,包括度量、向异常跟踪器报告异常、日志记录和运行状况检查、外部化配置和安全性。此外,如第 3 章所述,服务可能还需要处理服务发现并实现断路器。这不是您在每次实施新服务时都想从头开始设置的东西。如果这样做,可能需要几天甚至几周的时间才能写出第一行业务逻辑。

This chapter has described numerous concerns that a service must implement, including metrics, reporting exceptions to an exception tracker, logging and health checks, externalized configuration, and security. Moreover, as described in chapter 3, a service may also need to handle service discovery and implement circuit breakers. That’s not something you’d want to set up from scratch each time you implement a new service. If you did, it would potentially be days, if not weeks, before you wrote your first line of business logic.

模式:微服务机箱

在处理横切关注点(如异常跟踪、日志记录、运行状况检查、外部化配置和分布式跟踪)的框架或框架集合上构建服务。请参阅 http://microservices.io/patterns/microservice-chassis.html。

Build services on a framework or collection of frameworks that handle cross-cutting concerns, such as exception tracking, logging, health checks, externalized configuration, and distributed tracing. See http://microservices.io/patterns/microservice-chassis.html.

开发服务的一种更快方法是在微服务机箱上构建服务。如图 11.16 所示,微服务机箱是处理这些问题的一个框架或一组框架。使用微服务机箱时,您几乎不需要(甚至完全不需要)编写代码来处理这些问题。

A much faster way to develop services is to build your services upon a microservices chassis. As figure 11.16 shows, a microservice chassis is a framework or set of frameworks that handle these concerns. When using a microservice chassis, you write little, if any, code to handle these concerns.

图 11.16. 微服务机箱是一个框架,可以处理许多问题,例如异常跟踪、日志记录、运行状况检查、外部化配置和分布式跟踪。

在本节中,我首先介绍微服务机箱的概念,并推荐一些优秀的微服务机箱框架。之后,我将介绍服务网格的概念,在撰写本文时,它正在成为使用框架和库的一种有趣的替代方案。

In this section, I first describe the concept of a microservice chassis and suggest some excellent microservice chassis frameworks. After that I introduce the concept of a service mesh, which at the time of writing is emerging as an intriguing alternative to using frameworks and libraries.

我们先来看看微服务机箱的思路。

Let’s first look at the idea of a microservice chassis.

11.4.1. 使用微服务机箱

11.4.1. Using a microservice chassis

微服务机箱是一个框架或一组框架,用于处理许多问题,包括:

A microservices chassis is a framework or set of frameworks that handle numerous concerns including the following:

  • 外部化配置
  • Externalized configuration
  • 运行状况检查
  • Health checks
  • 应用程序指标
  • Application metrics
  • 服务发现
  • Service discovery
  • 断路器
  • Circuit breakers
  • 分布式跟踪
  • Distributed tracing

它显著减少了您需要编写的代码量。您甚至可能不需要编写任何代码。相反,您可以配置微服务机箱以满足您的要求。微服务机箱使您能够专注于开发服务的业务逻辑。

It significantly reduces the amount of code you need to write. You may not even need to write any code. Instead, you configure the microservice chassis to fit your requirements. A microservice chassis enables you to focus on developing your service’s business logic.
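As one example of a concern a chassis handles for you, here is a minimal health check endpoint built with the JDK's built-in HTTP server. Spring Boot Actuator exposes a richer equivalent at /actuator/health without any of this code; this sketch only illustrates the mechanism.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Sketch of a health check endpoint: the deployment infrastructure polls
// GET /health and restarts the instance if it stops answering 200 OK.
public class HealthCheckServer {
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            // A real health check would also verify connectivity to the
            // database, message broker, and other dependencies.
            byte[] body = "{\"status\":\"UP\"}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```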

FTGO 应用程序使用 Spring Boot 和 Spring Cloud 作为微服务机箱。Spring Boot 提供外部化配置等功能。Spring Cloud 提供熔断器等功能。它还实现了客户端服务发现,尽管 FTGO 应用程序依赖基础设施进行服务发现。Spring Boot 和 Spring Cloud 并不是唯一的微服务机箱框架。例如,如果您使用 GoLang 编写服务,则可以使用 Go Kit (https://github.com/go-kit/kit) 或 Micro (https://github.com/micro/micro)。

The FTGO application uses Spring Boot and Spring Cloud as the microservice chassis. Spring Boot provides functions such as externalized configuration. Spring Cloud provides functions such as circuit breakers. It also implements client-side service discovery, although the FTGO application relies on the infrastructure for service discovery. Spring Boot and Spring Cloud aren’t the only microservice chassis frameworks. If, for example, you’re writing services in GoLang, you could use either Go Kit (https://github.com/go-kit/kit) or Micro (https://github.com/micro/micro).
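As a sketch of the externalized-configuration behavior Spring Boot provides, the helper below resolves a property from an OS environment variable and falls back to a default. The variable name `BROKER_URL` and the default value are assumptions for illustration; Spring Boot resolves properties from many more sources (files, command-line arguments, config servers) automatically.

```java
// Sketch of the Externalized configuration pattern: the deployment
// infrastructure supplies environment-specific values (network locations,
// credentials) via OS environment variables at service startup.
public class ExternalizedConfig {
    static String property(String envVar, String defaultValue) {
        String value = System.getenv(envVar);
        return (value != null && !value.isEmpty()) ? value : defaultValue;
    }

    public static void main(String[] args) {
        // e.g. the message broker's location differs per environment
        String brokerUrl = property("BROKER_URL", "localhost:9092");
        System.out.println("Connecting to broker at " + brokerUrl);
    }
}
```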

使用微服务机箱的一个缺点是,您所使用的每种语言/平台组合都需要一个微服务机箱 开发服务。幸运的是,微服务机箱实现的许多功能很可能是 由基础设施实施。例如,如第 3 章所述,许多部署环境处理服务发现。此外,微服务的许多网络相关功能 Chassis 将由所谓的 Service Mesh 处理,Service Mesh 是在服务之外运行的基础设施层。

One drawback of using a microservice chassis is that you need one for every language/platform combination that you use to develop services. Fortunately, it’s likely that many of the functions implemented by a microservice chassis will instead be implemented by the infrastructure. For example, as described in chapter 3, many deployment environments handle service discovery. What’s more, many of the network-related functions of a microservice chassis will be handled by what’s known as a service mesh, an infrastructure layer running outside of the services.

11.4.2. 从微服务机箱到服务网格

11.4.2. From microservice chassis to service mesh

微服务机箱是实现各种横切关注点(如断路器)的好方法。但使用微服务机箱的一个障碍是,您使用的每种编程语言都需要一个微服务机箱。例如,如果您是 Java/Spring 开发人员,Spring Boot 和 Spring Cloud 很有用,但如果您想编写基于 NodeJS 的服务,它们就没有任何帮助。

A microservice chassis is a good way to implement various cross-cutting concerns, such as circuit breakers. But one obstacle to using a microservice chassis is that you need one for each programming language you use. For example, Spring Boot and Spring Cloud are useful if you’re a Java/Spring developer, but they aren’t any help if you want to write a NodeJS-based service.

模式:服务网格

通过实现各种关注点(包括熔断器、分布式跟踪、服务发现、负载均衡和基于规则的流量路由)的网络层路由所有进出服务的网络流量。请参阅 http://microservices.io/patterns/deployment/service-mesh.html。

Route all network traffic in and out of services through a networking layer that implements various concerns, including circuit breakers, distributed tracing, service discovery, load balancing, and rule-based traffic routing. See http://microservices.io/patterns/deployment/service-mesh.html.

避免此问题的一种新兴替代方案是在服务之外,在所谓的服务网格中实现其中一些功能。服务网格是一种网络基础设施,用于调解服务与其他服务以及外部应用程序之间的通信。如图 11.17 所示,所有进出服务的网络流量都通过服务网格。它实现了各种关注点,包括熔断器、分布式跟踪、服务发现、负载均衡和基于规则的流量路由。服务网格还可以通过在服务之间使用基于 TLS 的 IPC 来保护进程间通信。因此,您不再需要在服务中实现这些特定的关注点。

An emerging alternative that avoids this problem is to implement some of this functionality outside of the service in what’s known as a service mesh. A service mesh is networking infrastructure that mediates the communication between a service and other services and external applications. As figure 11.17 shows, all network traffic in and out of a service goes through the service mesh. It implements various concerns including circuit breakers, distributed tracing, service discovery, load balancing, and rule-based traffic routing. A service mesh can also secure interprocess communication by using TLS-based IPC between services. As a result, you no longer need to implement these particular concerns in the services.

图 11.17. 进出服务的所有网络流量都流经服务网格。服务网格实现各种功能,包括熔断器、分布式跟踪、服务发现和负载均衡。微服务机箱因此需要实现的功能更少。服务网格还通过在服务之间使用基于 TLS 的 IPC 来保护进程间通信。

使用服务网格时,微服务机箱要简单得多。它只需要实现与应用程序代码紧密集成的关注点,例如外部化配置和运行状况检查。微服务机箱必须通过传播分布式跟踪信息(例如我之前在第 11.3.3 节中讨论的 B3 标准标头)来支持分布式跟踪。

When using a service mesh, the microservice chassis is much simpler. It only needs to implement concerns that are tightly integrated with the application code, such as externalized configuration and health checks. The microservice chassis must support distributed tracing by propagating distributed tracing information, such as the B3 standard headers I discussed earlier in section 11.3.3.

服务网格概念是一个非常有前途的想法。它使开发人员不必处理各种横切关注点。此外,服务网格路由流量的能力使您能够将部署与发布分开。它使您能够将服务的新版本部署到生产环境中,但只向某些用户(例如内部测试用户)发布。第 12 章在描述如何使用 Kubernetes 部署服务时进一步讨论了这个概念。

The service mesh concept is an extremely promising idea. It frees the developer from having to deal with various cross-cutting concerns. Also, the ability of a service mesh to route traffic enables you to separate deployment from release. It gives you the ability to deploy a new version of a service into production but only release it to certain users, such as internal test users. Chapter 12 discusses this concept further when describing how to deploy services using Kubernetes.

服务网格实现的当前状态

有多种服务网格实现,包括:

  • Istio
  • Linkerd
  • Conduit

There are various service mesh implementations, including the following:

  • Istio
  • Linkerd
  • Conduit

截至撰写本文时,Linkerd 是最成熟的,Istio 和 Conduit 仍在积极开发中。要了解有关这项令人兴奋的新技术的更多信息,请查看每个产品的文档。

As of the time of writing, Linkerd is the most mature, with Istio and Conduit still under active development. For more information about this exciting new technology, take a look at each product’s documentation.

总结

Summary

  • 服务必须实现其功能要求,但它也必须是安全的、可配置的和可观察的。
  • It’s essential that a service implements its functional requirements, but it must also be secure, configurable, and observable.
  • 微服务架构中安全性的许多方面与整体式架构中的安全性没有什么不同。但应用程序安全性的某些方面必然不同,包括如何在 API 网关和服务之间传递用户身份,以及由谁负责身份验证和授权。一种常用的方法是由 API 网关对客户端进行身份验证。API 网关在对服务的每个请求中包含一个透明令牌,例如 JWT。该令牌包含主体的身份及其角色。服务使用令牌中的信息来授权对资源的访问。OAuth 2.0 是微服务架构中安全性的良好基础。
  • Many aspects of security in a microservice architecture are no different than in a monolithic architecture. But there are some aspects of application security that are necessarily different, including how user identity is passed between the API gateway and the services and who is responsible for authentication and authorization. A commonly used approach is for the API gateway to authenticate clients. The API gateway includes a transparent token, such as a JWT, in each request to a service. The token contains the identity of the principal and their roles. The services use the information in the token to authorize access to resources. OAuth 2.0 is a good foundation for security in a microservice architecture.
  • 服务通常使用一个或多个外部服务,例如消息代理和数据库。每个外部服务的网络位置和凭据通常取决于服务运行的环境。您必须应用外部化配置模式,并实现一种在运行时为服务提供配置属性的机制。一种常用的方法是由部署基础设施在创建服务实例时通过操作系统环境变量或属性文件提供这些属性。另一个选项是服务实例从配置属性服务器检索其配置。
  • A service typically uses one or more external services, such as message brokers and databases. The network location and credentials of each external service often depend on the environment that the service is running in. You must apply the Externalized configuration pattern and implement a mechanism that provides a service with configuration properties at runtime. One commonly used approach is for the deployment infrastructure to supply those properties via operating system environment variables or a properties file when it creates a service instance. Another option is for a service instance to retrieve its configuration from a configuration properties server.
  • 运营和开发人员共同负责实现可观测性模式。运营负责可观测性基础设施,例如处理日志聚合、指标、异常跟踪和分布式跟踪的服务器。开发人员负责确保其服务是可观测的。服务必须具有运行状况检查 API 端点、生成日志条目、收集和公开指标、向异常跟踪服务报告异常,并实现分布式跟踪。
  • Operations and developers share responsibility for implementing the observability patterns. Operations is responsible for the observability infrastructure, such as servers that handle log aggregation, metrics, exception tracking, and distributed tracing. Developers are responsible for ensuring that their services are observable. Services must have health check API endpoints, generate log entries, collect and expose metrics, report exceptions to an exception tracking service, and implement distributed tracing.
  • 为了简化和加速开发,您应该在微服务机箱之上开发服务。微服务机箱是处理各种横切关注点(包括本章中描述的关注点)的框架或框架集合。不过,随着时间的推移,微服务机箱的许多与网络相关的功能很可能会迁移到服务网格中,即服务的所有网络流量都流经的一个基础设施软件层。
  • In order to simplify and accelerate development, you should develop services on top of a microservices chassis. A microservices chassis is a framework or set of frameworks that handle various cross-cutting concerns, including those described in this chapter. Over time, though, it's likely that many of the networking-related functions of a microservice chassis will migrate into a service mesh, a layer of infrastructure software through which all of a service's network traffic flows.

第 12 章.部署微服务

Chapter 12. Deploying microservices

本章涵盖

This chapter covers

  • 四种关键部署模式、它们的工作原理及其优点和缺点:

    • 特定于语言的打包格式
    • 将服务部署为 VM
    • 将服务部署为容器
    • 无服务器部署
  • The four key deployment patterns, how they work, and their benefits and drawbacks:

    • Language-specific packaging format
    • Deploying a service as a VM
    • Deploying a service as a container
    • Serverless deployment
  • 使用 Kubernetes 部署服务
  • Deploying services with Kubernetes
  • 使用服务网格将部署与发布分开
  • Using a service mesh to separate deployment from release
  • 使用 AWS Lambda 部署服务
  • Deploying services with AWS Lambda
  • 选择部署模式
  • Picking a deployment pattern

Mary 和她在 FTGO 的团队几乎完成了他们第一个服务的编写。虽然功能尚未完成,但它已在开发人员的笔记本电脑和 Jenkins CI 服务器上运行。但这还不够好。软件在生产环境中运行并可供用户使用之前,对 FTGO 没有任何价值。FTGO 需要将其服务部署到生产环境中。

Mary and her team at FTGO are almost finished writing their first service. Although it’s not yet feature complete, it’s running on developer laptops and the Jenkins CI server. But that’s not good enough. Software has no value to FTGO until it’s running in production and available to users. FTGO needs to deploy their service into production.

部署是两个相互关联的概念的组合:流程和架构。部署流程由人员(开发人员和运营人员)为将软件投入生产而必须执行的步骤组成。部署架构定义了运行该软件的环境的结构。自从我在 1990 年代后期首次开始开发企业 Java 应用程序以来,部署的这两个方面都发生了根本性的变化。开发人员将代码"扔过墙"交给生产的手动过程已经变得高度自动化。如图 12.1 所示,物理生产环境已被越来越轻量级和短暂的计算基础设施所取代。

Deployment is a combination of two interrelated concepts: process and architecture. The deployment process consists of the steps that must be performed by people—developers and operations—in order to get software into production. The deployment architecture defines the structure of the environment in which that software runs. Both aspects of deployment have changed radically since I first started developing Enterprise Java applications in the late 1990s. The manual process of developers throwing code over the wall to production has become highly automated. As figure 12.1 shows, physical production environments have been replaced by increasingly lightweight and ephemeral computing infrastructure.

图 12.1.重量级和长寿命的物理机器已经被越来越轻量级和短暂的技术所抽象化。

回到 1990 年代,如果您想将应用程序部署到生产环境中,第一步是将您的应用程序连同一套操作说明"扔过墙"交给运营。例如,您可以提交一个故障单,要求运营部署该应用程序。接下来发生的一切完全是运营的责任,除非他们遇到需要您帮助解决的问题。通常,运营部门购买并安装昂贵的重量级应用程序服务器,例如 WebLogic 或 WebSphere。然后,他们会登录到应用程序服务器控制台并部署您的应用程序。他们会像照顾宠物一样精心照料这些机器,安装补丁并更新软件。

Back in the 1990s, if you wanted to deploy an application into production, the first step was to throw your application along with a set of operating instructions over the wall to operations. You might, for example, file a trouble ticket asking operations to deploy the application. Whatever happened next was entirely the responsibility of operations, unless they encountered a problem they needed your help to fix. Typically, operations bought and installed expensive and heavyweight application servers such as WebLogic or WebSphere. Then they would log in to the application server console and deploy your applications. They would lovingly care for those machines, as if they were pets, installing patches and updating the software.

在 2000 年代中期,昂贵的应用程序服务器被开源、轻量级的 Web 容器(如 Apache Tomcat 和 Jetty)所取代。您仍然可以在每个 Web 容器上运行多个应用程序,但每个 Web 容器运行一个应用程序变得可行。此外,虚拟机开始取代物理机。但机器仍然被当作心爱的宠物对待,部署基本上仍然是一个手动过程。

In the mid 2000s, the expensive application servers were replaced with open source, lightweight web containers such as Apache Tomcat and Jetty. You could still run multiple applications on each web container, but having one application per web container became feasible. Also, virtual machines started to replace physical machines. But machines were still treated as beloved pets, and deployment was still a fundamentally manual process.

如今,部署过程已大不相同。采用 DevOps 意味着开发团队不再将代码移交给单独的生产团队,而是同时负责部署他们的应用程序或服务。在某些组织中,运营为开发人员提供了一个用于部署其代码的控制台。或者,更好的是,一旦测试通过,部署管道就会自动将代码部署到生产环境中。

Today, the deployment process is radically different. Instead of handing off code to a separate production team, the adoption of DevOps means that the development team is also responsible for deploying their application or services. In some organizations, operations provides developers with a console for deploying their code. Or, better yet, once the tests pass, the deployment pipeline automatically deploys the code into production.

随着物理机被抽象化,生产环境中使用的计算资源也发生了根本性的变化。在高度自动化的云(如 AWS)上运行的虚拟机已经取代了长期存在的、像宠物一样的物理机和虚拟机。今天的虚拟机是不可变的。它们被当作一次性的牛而不是宠物来对待,并且会被丢弃和重新创建,而不是重新配置。容器是虚拟机之上的一个更轻量级的抽象层,是一种越来越流行的应用程序部署方式。对于许多使用案例,您还可以使用更轻量级的无服务器部署平台,例如 AWS Lambda。

The computing resources used in a production environment have also changed radically with physical machines being abstracted away. Virtual machines running on a highly automated cloud, such as AWS, have replaced the long-lived, pet-like physical and virtual machines. Today's virtual machines are immutable. They're treated as disposable cattle instead of pets and are discarded and recreated rather than being reconfigured. Containers, an even more lightweight abstraction layer on top of virtual machines, are an increasingly popular way of deploying applications. You can also use an even more lightweight serverless deployment platform, such as AWS Lambda, for many use cases.

部署流程和架构的演变与微服务架构的日益普及同时发生,这并非巧合。一个应用程序可能有数十或数百个以各种语言和框架编写的服务。因为每个服务都是一个小型应用程序,这意味着您在生产中有数十或数百个应用程序。例如,系统管理员手动配置服务器和服务已不再实用。如果要大规模部署微服务,您需要高度自动化的部署流程和基础设施。

It’s no coincidence that the evolution of deployment processes and architectures has coincided with the growing adoption of the microservice architecture. An application might have tens or hundreds of services written in a variety of languages and frameworks. Because each service is a small application, that means you have tens or hundreds of applications in production. It’s no longer practical, for example, for system administrators to hand configure servers and services. If you want to deploy microservices at scale, you need a highly automated deployment process and infrastructure.

图 12.2 显示了生产环境的高级视图。生产环境使开发人员能够配置和管理他们的服务,使部署管道能够部署新版本的服务,并使用户能够访问这些服务实现的功能。

Figure 12.2 shows a high-level view of a production environment. The production environment enables developers to configure and manage their services, the deployment pipeline to deploy new versions of services, and users to access functionality implemented by those services.

图 12.2. 生产环境的简化视图。它提供四个主要功能:服务管理使开发人员能够部署和管理他们的服务,运行时管理确保服务正在运行,监控可视化服务行为并生成警报,请求路由将请求从用户路由到服务。

生产环境必须实现四个关键功能:

A production environment must implement four key capabilities:

  • 服务管理界面:使开发人员能够创建、更新和配置服务。理想情况下,此界面是由命令行和 GUI 部署工具调用的 REST API。
  • Service management interface: Enables developers to create, update, and configure services. Ideally, this interface is a REST API invoked by command-line and GUI deployment tools.
  • 运行时服务管理:尝试确保所需数量的服务实例始终处于运行状态。如果服务实例崩溃或由于某种原因无法处理请求,生产环境必须重新启动它。如果机器崩溃,生产环境必须在另一台机器上重新启动这些服务实例。
  • Runtime service management: Attempts to ensure that the desired number of service instances is running at all times. If a service instance crashes or is somehow unable to handle requests, the production environment must restart it. If a machine crashes, the production environment must restart those service instances on a different machine.
  • 监控:使开发人员能够深入了解其服务正在做什么,包括日志文件和指标。如果出现问题,生产环境必须提醒开发人员。第 11 章介绍了监控,也称为可观测性。
  • Monitoring: Provides developers with insight into what their services are doing, including log files and metrics. If there are problems, the production environment must alert the developers. Chapter 11 describes monitoring, also called observability.
  • 请求路由:将用户的请求路由到服务。
  • Request routing: Routes requests from users to the services.

在本章中,我将讨论四个主要的部署选项:

In this chapter I discuss the four main deployment options:

  • 将服务部署为特定于语言的包,例如 Java JAR 或 WAR 文件。值得探索此选项,因为尽管我建议使用其他选项之一,但它的缺点正是其他选项的动机。
  • Deploying services as language-specific packages, such as Java JAR or WAR files. It's worthwhile exploring this option, because even though I recommend using one of the other options, its drawbacks motivate the other options.
  • 将服务部署为虚拟机,通过将服务打包为封装其技术栈的虚拟机映像来简化部署。
  • Deploying services as virtual machines, which simplifies deployment by packaging a service as a virtual machine image that encapsulates the service's technology stack.
  • 将服务部署为容器,这比虚拟机更轻量级。我将展示如何使用 Kubernetes(一种流行的 Docker 编排框架)部署 FTGO 应用程序的 Restaurant Service。
  • Deploying services as containers, which are more lightweight than virtual machines. I show how to deploy the FTGO application's Restaurant Service using Kubernetes, a popular Docker orchestration framework.
  • 使用无服务器部署来部署服务,这比容器更加现代。我们将了解如何使用 AWS Lambda(一种流行的无服务器平台)部署 Restaurant Service。
  • Deploying services using serverless deployment, which is even more modern than containers. We'll look at how to deploy Restaurant Service using AWS Lambda, a popular serverless platform.

我们首先看一下如何将服务部署为特定于语言的包。

Let’s first look at how to deploy services as language-specific packages.

12.1. 使用 Language-specific packaging format 模式部署服务

12.1. Deploying services using the Language-specific packaging format pattern

假设您要部署 FTGO 应用程序的 Restaurant Service,这是一个基于 Spring Boot 的 Java 应用程序。部署此服务的一种方法是使用"服务作为特定于语言的包"模式。使用此模式时,在生产环境中部署的内容(以及由服务运行时管理的内容)是采用其特定于语言的包形式的服务。对于 Restaurant Service,这是可执行 JAR 文件或 WAR 文件。对于其他语言(如 NodeJS),服务是源代码和模块的目录。对于某些语言(如 GoLang),服务是特定于操作系统的可执行文件。

Let’s imagine that you want to deploy the FTGO application’s Restaurant Service, which is a Spring Boot-based Java application. One way to deploy this service is by using the Service as a language-specific package pattern. When using this pattern, what’s deployed in production and what’s managed by the service runtime is a service in its language-specific package. In the case of Restaurant Service, that’s either the executable JAR file or a WAR file. For other languages, such as NodeJS, a service is a directory of source code and modules. For some languages, such as GoLang, a service is an operating system-specific executable.

模式:特定于语言的打包格式

将特定于语言的包部署到生产环境中。请参阅 http://microservices.io/patterns/deployment/language-specific-packaging.html

Deploy a language-specific package into production. See http://microservices.io/patterns/deployment/language-specific-packaging.html.

要在一台机器上部署 Restaurant Service,您首先需要安装必要的运行时,在本例中为 JDK。如果它是一个 WAR 文件,您还需要安装 Apache Tomcat 等 Web 容器。配置好机器后,将包复制到机器上并启动服务。每个服务实例都作为一个 JVM 进程运行。

To deploy Restaurant Service on a machine, you would first install the necessary runtime, which in this case is the JDK. If it’s a WAR file, you also need to install a web container such as Apache Tomcat. Once you’ve configured the machine, you copy the package to the machine and start the service. Each service instance runs as a JVM process.

理想情况下,您已经设置了部署管道以自动将服务部署到生产环境,如图 12.3 所示。部署管道构建可执行的 JAR 文件或 WAR 文件。然后,它调用生产环境的服务管理界面来部署新版本。

Ideally, you’ve set up your deployment pipeline to automatically deploy the service to production, as shown in figure 12.3. The deployment pipeline builds an executable JAR file or WAR file. It then invokes the production environment’s service management interface to deploy the new version.

图 12.3. 部署管道构建可执行 JAR 文件并将其部署到生产环境中。在生产环境中,每个服务实例是在安装了 JDK 或 JRE 的机器上运行的一个 JVM。

服务实例通常是单个进程,但有时可能是一组进程。例如,Java 服务实例是一个运行 JVM 的进程。一个 NodeJS 服务可能会生成多个工作进程,以便并发处理请求。某些语言支持在同一进程中部署多个服务实例。

A service instance is typically a single process but sometimes may be a group of processes. A Java service instance, for example, is a process running the JVM. A NodeJS service might spawn multiple worker processes in order to process requests concurrently. Some languages support deploying multiple service instances within the same process.

有时,您可能会在一台机器上部署单个服务实例,同时保留在同一台机器上部署多个服务实例的选项。例如,如图 12.4 所示,您可以在一台机器上运行多个 JVM。每个 JVM 运行一个服务实例。

Sometimes you might deploy a single service instance on a machine, while retaining the option to deploy multiple service instances on the same machine. For example, as figure 12.4 shows, you could run multiple JVMs on a single machine. Each JVM runs a single service instance.

图 12.4. 在同一台机器上部署多个服务实例。它们可能是同一服务的实例,也可能是不同服务的实例。操作系统的开销在服务实例之间共享。每个服务实例都是一个单独的进程,因此它们之间有一定程度的隔离。

某些语言还允许您在单个进程中运行多个服务实例。例如,如图 12.5 所示,您可以在单个 Apache Tomcat 上运行多个 Java 服务。

Some languages also let you run multiple service instances in a single process. For example, as figure 12.5 shows, you can run multiple Java services on a single Apache Tomcat.

图 12.5. 在同一 Web 容器或应用程序服务器上部署多个服务实例。它们可能是同一服务的实例,也可能是不同服务的实例。操作系统和运行时的开销在所有服务实例之间共享。但是,由于服务实例位于同一进程中,因此它们之间没有隔离。

在传统的昂贵且重量级的应用程序服务器(例如 WebLogic 和 WebSphere)上部署应用程序时,通常使用此方法。您还可以将服务打包为 OSGI 捆绑包,并在每个 OSGI 容器中运行多个服务实例。

This approach is commonly used when deploying applications on traditional expensive and heavyweight application servers, such as WebLogic and WebSphere. You can also package services as OSGI bundles and run multiple service instances in each OSGI container.

Service as a language-specific 包模式既有优点也有缺点。让我们首先看看它们的好处。

The Service as a language-specific package pattern has both benefits and drawbacks. Let’s first look at the benefits.

12.1.1. 服务作为特定于语言的包模式的好处

12.1.1. Benefits of the Service as a language-specific package pattern

服务即特定于语言的包模式具有一些好处:

The Service as a language-specific package pattern has a few benefits:

  • 快速部署
  • Fast deployment
  • 高效的资源利用率,尤其是在同一台机器上或同一进程中运行多个实例时
  • Efficient resource utilization, especially when running multiple instances on the same machine or within the same process

Let’s look at each one.

Fast deployment

One major benefit of this pattern is that deploying a service instance is relatively fast: you copy the service to a host and start it. If the service is written in Java, you copy a JAR or WAR file. For other languages, such as NodeJS or Ruby, you copy the source code. In either case, the number of bytes copied over the network is relatively small.

Also, starting a service is rarely time consuming. If the service is its own process, you start it. Otherwise, if the service is one of several instances running in the same container process, you either dynamically deploy it into the container or restart the container. Because of the lack of overhead, starting a service is usually fast.

Efficient resource utilization

Another major benefit of this pattern is that it uses resources relatively efficiently. Multiple service instances share the machine and its operating system. It’s even more efficient if multiple service instances run within the same process. For example, multiple web applications could share the same Apache Tomcat server and JVM.

12.1.2. Drawbacks of the Service as a language-specific package pattern

Despite its appeal, the Service as a language-specific package pattern has several significant drawbacks:

  • Lack of encapsulation of the technology stack.
  • No ability to constrain the resources consumed by a service instance.
  • Lack of isolation when running multiple service instances on the same machine.
  • Automatically determining where to place service instances is challenging.

Let’s look at each drawback.

Lack of encapsulation of the technology stack

The operation team must know the specific details of how to deploy each and every service. Each service needs a particular version of the runtime. A Java web application, for example, needs particular versions of Apache Tomcat and the JDK. Operations must install the correct version of each required software package.

To make matters worse, services can be written in a variety of languages and frameworks. They might also be written in multiple versions of those languages and frameworks. Consequently, the development team must share lots of details with operations. This complexity increases the risk of errors during deployment. A machine might, for example, have the wrong version of the language runtime.

No ability to constrain the resources consumed by a service instance

Another drawback is that you can’t constrain the resources consumed by a service instance. A process can potentially consume all of a machine’s CPU or memory, starving other service instances and operating systems of resources. This might happen, for example, because of a bug.

Lack of isolation when running multiple service instances on the same machine

The problem is even worse when running multiple instances on the same machine. The lack of isolation means that a misbehaving service instance can impact other service instances. As a result, the application risks being unreliable, especially when running multiple service instances on the same machine.

Automatically determining where to place service instances is challenging

Another challenge with running multiple service instances on the same machine is determining the placement of service instances. Each machine has a fixed set of resources, CPU, memory, and so on, and each service instance needs some amount of resources. It’s important to assign service instances to machines in a way that uses the machines efficiently without overloading them. As I explain shortly, VM-based clouds and container orchestration frameworks handle this automatically. When deploying services natively, it’s likely that you’ll need to manually decide the placement.

As you can see, despite its familiarity, the Service as a language-specific package pattern has some significant drawbacks. You should rarely use this approach, except perhaps when efficiency outweighs all other concerns.

Let’s now look at modern ways of deploying services that avoid these problems.

12.2. Deploying services using the Service as a virtual machine pattern

Once again, imagine you want to deploy the FTGO Restaurant Service, except this time it’s on AWS EC2. One option would be to create and configure an EC2 instance and copy onto it the executable or WAR file. Although you would get some benefit from using the cloud, this approach suffers from the drawbacks described in the preceding section. A better, more modern approach is to package the service as an Amazon Machine Image (AMI), as shown in figure 12.6. Each service instance is an EC2 instance created from that AMI. The EC2 instances would typically be managed by an AWS Auto Scaling group, which attempts to ensure that the desired number of healthy instances is always running.

Pattern: Deploy a service as a VM

Deploy services packaged as VM images into production. Each service instance is a VM. See http://microservices.io/patterns/deployment/service-per-vm.html.

Figure 12.6. The deployment pipeline packages the service as a virtual machine image, such as an EC2 AMI, containing everything required to run the service, including the language runtime. At runtime, each service instance is a VM, such as an EC2 instance, instantiated from that image. An EC2 Elastic Load Balancer routes requests to the instances.

The virtual machine image is built by the service's deployment pipeline. The deployment pipeline, as figure 12.6 shows, runs a VM image builder to create a VM image that contains the service's code and whatever software is required to run it. For example, the VM builder for a FTGO service installs the JDK and the service's executable JAR. The VM image builder configures the VM image to run the application when the VM boots, using Linux's init system, such as upstart.
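To make that last step concrete, the boot-time launch can be sketched as an Upstart job file baked into the VM image. This is only an illustration; the file path, service name, and JAR location below are hypothetical, not taken from the FTGO code:

```
# /etc/init/ftgo-restaurant-service.conf -- hypothetical Upstart job
description "FTGO Restaurant Service"

# Start once the system reaches a normal multiuser runlevel; stop on shutdown.
start on runlevel [2345]
stop on runlevel [016]

# Restart the service if its process dies.
respawn

exec java -jar /opt/ftgo/ftgo-restaurant-service.jar
```

Because the job is part of the image, every VM instantiated from the image starts the service automatically, with no per-instance setup.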

There are a variety of tools that your deployment pipeline can use to build VM images. One early tool for creating EC2 AMIs is Aminator, created by Netflix, which used it to deploy its video-streaming service on AWS (https://github.com/Netflix/aminator). A more modern VM image builder is Packer, which unlike Aminator supports a variety of virtualization technologies, including EC2, Digital Ocean, Virtual Box, and VMware (www.packer.io). To use Packer to create an AMI, you write a configuration file that specifies the base image and a set of provisioners that install software and configure the AMI.
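As an illustration, a minimal Packer template for building such an AMI might look like the following sketch. The region, source AMI ID, package names, and paths are placeholders, not values from the FTGO project:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-west-2",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "ftgo-restaurant-service-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get install -y openjdk-8-jre-headless"
      ]
    },
    {
      "type": "file",
      "source": "build/libs/ftgo-restaurant-service.jar",
      "destination": "/tmp/ftgo-restaurant-service.jar"
    }
  ]
}
```

Running `packer build` on a template like this launches a temporary EC2 instance from the base image, runs the provisioners, and snapshots the result as a new AMI.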

About Elastic Beanstalk

Elastic Beanstalk, which is provided by AWS, is an easy way to deploy your services using VMs. You upload your code, such as a WAR file, and Elastic Beanstalk deploys it as one or more load-balanced and managed EC2 instances. Elastic Beanstalk is perhaps not quite as fashionable as, say, Kubernetes, but it’s an easy way to deploy a microservices-based application on EC2.

Interestingly, Elastic Beanstalk combines elements of the three deployment patterns described in this chapter. It supports several packaging formats for several languages, including Java, Ruby, and .NET. It deploys the application as VMs, but rather than building an AMI, it uses a base image that installs the application on startup.

Elastic Beanstalk can also deploy Docker containers. Each EC2 instance runs a collection of one or more containers. Unlike a Docker orchestration framework, covered later in the chapter, the unit of scaling is the EC2 instance rather than a container.

Let’s look at the benefits and drawbacks of using this approach.

12.2.1. The benefits of deploying services as VMs

The Service as a virtual machine pattern has a number of benefits:

  • The VM image encapsulates the technology stack.
  • Isolated service instances.
  • Uses mature cloud infrastructure.

Let’s look at each one.

The VM image encapsulates the technology stack

An important benefit of this pattern is that the VM image contains the service and all of its dependencies. It eliminates the error-prone requirement to correctly install and set up the software that a service needs in order to run. Once a service has been packaged as a virtual machine, it becomes a black box that encapsulates your service’s technology stack. The VM image can be deployed anywhere without modification. The API for deploying the service becomes the VM management API. Deployment becomes much simpler and more reliable.

Service instances are isolated

A major benefit of virtual machines is that each service instance runs in complete isolation. That, after all, is one of the main goals of virtual machine technology. Each virtual machine has a fixed amount of CPU and memory and can’t steal resources from other services.

Uses mature cloud infrastructure

Another benefit of deploying your microservices as virtual machines is that you can leverage mature, highly automated cloud infrastructure. Public clouds such as AWS attempt to schedule VMs on physical machines in a way that avoids overloading the machine. They also provide valuable features such as load balancing of traffic across VMs and autoscaling.

12.2.2. The drawbacks of deploying services as VMs

The Service as a VM pattern also has some drawbacks:

  • Less-efficient resource utilization
  • Relatively slow deployments
  • System administration overhead

Let’s look at each drawback in turn.

Less-efficient resource utilization

Each service instance has the overhead of an entire virtual machine, including its operating system. Moreover, a typical public IaaS virtual machine offers a limited set of VM sizes, so the VM will probably be underutilized. This is less likely to be a problem for Java-based services because they’re relatively heavyweight. But this pattern might be an inefficient way of deploying lightweight NodeJS and GoLang services.

Relatively slow deployments

Building a VM image typically takes several minutes because of the size of the VM. There are lots of bits to be moved over the network. Also, instantiating a VM from a VM image is time consuming because of, once again, the amount of data that must be moved over the network. The operating system running inside the VM also takes some time to boot, though slow is a relative term. This process, which perhaps takes minutes, is much faster than the traditional deployment process. But it's much slower than the more lightweight deployment patterns you'll read about soon.

System administration overhead

You’re responsible for patching the operating system and runtime. System administration may seem inevitable when deploying software, but later, in section 12.5, I describe serverless deployment, which eliminates this kind of system administration.

Let’s now look at an alternative way to deploy microservices that’s more lightweight, yet still has many of the benefits of virtual machines.

12.3. Deploying services using the Service as a container pattern

Containers are a more modern and lightweight deployment mechanism. They’re an operating-system-level virtualization mechanism. A container, as figure 12.7 shows, consists of usually one but sometimes multiple processes running in a sandbox, which isolates it from other containers. A container running a Java service, for example, would typically consist of the JVM process.

Figure 12.7. A container consists of one or more processes running in an isolated sandbox. Multiple containers usually run on a single machine. The containers share the operating system.

From the perspective of a process running in a container, it’s as if it’s running on its own machine. It typically has its own IP address, which eliminates port conflicts. All Java processes can, for example, listen on port 8080. Each container also has its own root filesystem. The container runtime uses operating system mechanisms to isolate the containers from each other. The most popular example of a container runtime is Docker, although there are others, such as Solaris Zones.

Pattern: Deploy a service as a container

Deploy services packaged as container images into production. Each service instance is a container. See http://microservices.io/patterns/deployment/service-per-container.html.

When you create a container, you can specify its CPU, memory resources, and, depending on the container implementation, perhaps the I/O resources. The container runtime enforces these limits and prevents a container from hogging the resources of its machine. When using a Docker orchestration framework such as Kubernetes, it’s especially important to specify a container’s resources. That’s because the orchestration framework uses a container’s requested resources to select the machine to run the container and thereby ensure that machines aren’t overloaded.
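For example, with Kubernetes you declare a container's requested and maximum resources in its spec. The following fragment is a sketch with hypothetical values, not a complete manifest:

```yaml
# Fragment of a Kubernetes pod/deployment spec (hypothetical values)
spec:
  containers:
    - name: ftgo-restaurant-service
      image: registry.acme.com/ftgo-restaurant-service:1.0.0.RELEASE
      resources:
        requests:        # used by the scheduler when placing the pod
          memory: "512Mi"
          cpu: "250m"    # a quarter of a CPU
        limits:          # enforced at runtime by the container runtime
          memory: "1Gi"
          cpu: "500m"
```

The scheduler uses the `requests` values to pick a machine with enough free capacity, and the runtime enforces the `limits` so the container can't hog its machine.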

Figure 12.8 shows the process of deploying a service as a container. At build-time, the deployment pipeline uses a container image-building tool, which reads the service’s code and a description of the image, to create the container image and stores it in a registry. At runtime, the container image is pulled from the registry and used to create containers.

Figure 12.8. A service is packaged as a container image, which is stored in a registry. At runtime, the service consists of multiple containers instantiated from that image. Containers typically run on virtual machines. A single VM usually runs multiple containers.

Let’s take a look at build-time and runtime steps in more detail.

12.3.1. Deploying services using Docker

To deploy a service as a container, you must package it as a container image. A container image is a filesystem image consisting of the application and any software required to run the service. It’s often a complete Linux root filesystem, although more lightweight images are also used. For example, to deploy a Spring Boot-based service, you build a container image containing the service’s executable JAR and the correct version of the JDK. Similarly, to deploy a Java web application, you would build a container image containing the WAR file, Apache Tomcat, and the JDK.

Building a Docker image

The first step in building an image is to create a Dockerfile. A Dockerfile describes how to build a Docker container image. It specifies the base container image, a series of instructions for installing software and configuring the container, and the shell command to run when the container is created. Listing 12.1 shows the Dockerfile used to build an image for Restaurant Service. It builds a container image containing the service’s executable JAR file. It configures the container to run the java -jar command on startup.

Listing 12.1. The Dockerfile used to build Restaurant Service
FROM openjdk:8u171-jre-alpine                                              1
RUN apk --no-cache add curl                                                2
CMD java ${JAVA_OPTS} -jar ftgo-restaurant-service.jar                     3
HEALTHCHECK --start-period=30s --interval=5s \
     CMD curl http://localhost:8080/actuator/health || exit 1              4
COPY build/libs/ftgo-restaurant-service.jar .                              5

  • 1 The base image
  • 2 Install curl for use by the health check.
  • 3 Configure Docker to run java -jar .. when the container is started.
  • 4 Configure Docker to invoke the health check endpoint.
  • 5 Copy the JAR from Gradle’s build directory into the image.

The base image openjdk:8u171-jre-alpine is a minimal footprint Linux image containing the JRE. The Dockerfile copies the service’s JAR into the image and configures the image to execute the JAR on startup. It also configures Docker to periodically invoke the health check endpoint, described in chapter 11. The HEALTHCHECK directive says to invoke the health check endpoint API, described in chapter 11, every 5 seconds after an initial 30-second delay, which gives the service time to start.

Once you’ve written the Dockerfile, you can then build the image. The following listing shows the shell commands to build the image for Restaurant Service. The script builds the service’s JAR file and executes the docker build command to create the image.

Listing 12.2. The shell commands used to build the container image for Restaurant Service
cd ftgo-restaurant-service                        1
../gradlew assemble                               2
docker build -t ftgo-restaurant-service .         3

  • 1 Change to the service’s directory.
  • 2 Build the service’s JAR.
  • 3 Build the image.

The docker build command has two arguments: the -t argument specifies the name of the image, and the . specifies what Docker calls the context. The context, which in this example is the current directory, consists of Dockerfile and the files used to build the image. The docker build command uploads the context to the Docker daemon, which builds the image.

Pushing the Docker image to a registry

The final step of the build process is to push the newly built Docker image to what is known as a registry. A Docker registry is the equivalent of a Java Maven repository for Java libraries, or a NodeJS npm registry for NodeJS packages. Docker Hub is an example of a public Docker registry and is equivalent to Maven Central or NpmJS.org. But for your applications you’ll probably want to use a private registry provided by a service such as Docker Cloud registry or AWS EC2 Container Registry.

You must use two Docker commands to push an image to a registry. First, you use the docker tag command to give the image a name that’s prefixed with the hostname and optional port of the registry. The image name is also suffixed with the version, which will be important when you make a new release of the service. For example, if the hostname of the registry is registry.acme.com, you would use this command to tag the image:

docker tag ftgo-restaurant-service registry.acme.com/ftgo-restaurant-
     service:1.0.0.RELEASE

Next you use the docker push command to upload that tagged image to the registry:

docker push registry.acme.com/ftgo-restaurant-service:1.0.0.RELEASE

This command often takes much less time than you might expect. That’s because a Docker image has what’s known as a layered file system, which enables Docker to only transfer part of the image over the network. An image’s operating system, Java runtime, and the application are in separate layers. Docker only needs to transfer those layers that don’t exist in the destination. As a result, transferring an image over a network is quite fast when Docker only has to move the application’s layers, which are a small fraction of the image.

Now that we’ve pushed the image to a registry, let’s look at how to create a container.

Running a Docker container

Once you’ve packaged your service as a container image, you can then create one or more containers. The container infrastructure will pull the image from the registry onto a production server. It will then create one or more containers from that image. Each container is an instance of your service.

As you might expect, Docker provides a docker run command that creates and starts a container. Listing 12.3 shows how to use this command to run Restaurant Service. The docker run command has several arguments, including the container image and a specification of environment variables to set in the runtime container. These are used to pass externalized configuration, such as the database’s network location.

Listing 12.3. Using docker run to run a containerized service
docker run \
  -d  \                                                               1
  --name ftgo-restaurant-service  \                                   2
  -p 8082:8080  \                                                     3
  -e SPRING_DATASOURCE_URL=... -e SPRING_DATASOURCE_USERNAME=...  \   4
  -e SPRING_DATASOURCE_PASSWORD=... \
  registry.acme.com/ftgo-restaurant-service:1.0.0.RELEASE             5

  • 1 Runs it as a background daemon
  • 2 The name of the container
  • 3 Binds port 8080 of the container to port 8082 of the host machine
  • 4 Environment variables
  • 5 Image to run

The docker run command pulls the image from the registry if necessary. It then creates and starts the container, which runs the java -jar command specified in the Dockerfile.

Using the docker run command may seem simple, but there are a couple of problems. One is that docker run isn’t a reliable way to deploy a service, because it creates a container running on a single machine. The Docker engine provides some basic management features, such as automatically restarting containers if they crash or if the machine is rebooted. But it doesn’t handle machine crashes.

Another problem is that services typically don’t exist in isolation. They depend on other services, such as databases and message brokers. It would be nice to deploy or undeploy a service and its dependencies as a unit.

A better approach that’s especially useful during development is to use Docker Compose. Docker Compose is a tool that lets you declaratively define a set of containers using a YAML file, and then start and stop those containers as a group. What’s more, the YAML file is a convenient way to specify numerous externalized configuration properties. To learn more about Docker Compose, I recommend reading Docker in Action by Jeff Nickoloff (Manning, 2016) and looking at the docker-compose.yml file in the example code.
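A docker-compose.yml for Restaurant Service and its MySQL database might be sketched as follows. The port mapping matches listing 12.3, but the database names and credentials here are illustrative, not the exact contents of the example code’s file:

```yaml
version: "3"
services:
  ftgo-restaurant-service:
    image: registry.acme.com/ftgo-restaurant-service:1.0.0.RELEASE
    ports:
      - "8082:8080"          # host port 8082 -> container port 8080
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://mysql/ftgo
      SPRING_DATASOURCE_USERNAME: ftgo_user
      SPRING_DATASOURCE_PASSWORD: ftgo_password
    depends_on:
      - mysql                # start the database container first
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: ftgo
      MYSQL_USER: ftgo_user
      MYSQL_PASSWORD: ftgo_password
      MYSQL_ROOT_PASSWORD: rootpassword
```

Running `docker-compose up -d` starts the whole group as background containers, and `docker-compose down` stops and removes them.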

The problem with Docker Compose, though, is that it’s limited to a single machine. To deploy services reliably, you must use a Docker orchestration framework, such as Kubernetes, which turns a set of machines into a pool of resources. I describe how to use Kubernetes later, in section 12.4. First, let’s review the benefits and drawbacks of using containers.

12.3.2. Benefits of deploying services as containers

Deploying services as containers has several benefits. First, containers have many of the benefits of virtual machines:

  • Encapsulation of the technology stack in which the API for managing your services becomes the container API.
  • Service instances are isolated.
  • Service instances’ resources are constrained.

But unlike virtual machines, containers are a lightweight technology. Container images are typically fast to build. For example, on my laptop it takes as little as five seconds to package a Spring Boot application as a container image. Moving a container image over the network, such as to and from the container registry, is also relatively fast, primarily because only a subset of an image’s layers need to be transferred. Containers also start very quickly, because there’s no lengthy OS boot process. When a container starts, all that runs is the service.

12.3.3. Drawbacks of deploying services as containers

One significant drawback of containers is that you’re responsible for the undifferentiated heavy lifting of administering the container images. You must patch the operating system and runtime. Also, unless you’re using a hosted container solution such as Google Container Engine or AWS ECS, you must administer the container infrastructure and possibly the VM infrastructure it runs on.

12.4. Deploying the FTGO application with Kubernetes

Now that we’ve looked at containers and their trade-offs, let’s look at how to deploy the FTGO application’s Restaurant Service using Kubernetes. Docker Compose, described in section 12.3.1, is great for development and testing. But to reliably run containerized services in production, you need to use a much more sophisticated container runtime, such as Kubernetes. Kubernetes is a Docker orchestration framework, a layer of software on top of Docker that turns a set of machines into a single pool of resources for running services. It endeavors to keep the desired number of instances of each service running at all times, even when service instances or machines crash. The agility of containers combined with the sophistication of Kubernetes is a compelling way to deploy services.

In this section, I first give an overview of Kubernetes, its functionality, and its architecture. After that, I show how to deploy a service using Kubernetes. Kubernetes is a complex topic, and covering it exhaustively is beyond the scope of this book, so I only show how to use Kubernetes from the perspective of a developer. For more information, I recommend Kubernetes in Action by Marko Luksa (Manning, 2018).

12.4.1. Overview of Kubernetes

Kubernetes is a Docker orchestration framework. A Docker orchestration framework treats a set of machines running Docker as a pool of resources. You tell the Docker orchestration framework to run N instances of your service, and it handles the rest. Figure 12.9 shows the architecture of a Docker orchestration framework.

Figure 12.9. A Docker orchestration framework turns a set of machines running Docker into a cluster of resources. It assigns containers to machines. The framework attempts to keep the desired number of healthy containers running at all times.

A Docker orchestration framework, such as Kubernetes, has three main functions:

  • Resource management: Treats a cluster of machines as a pool of CPU, memory, and storage volumes, turning the collection of machines into a single machine.
  • Scheduling: Selects the machine to run your container. By default, scheduling considers the resource requirements of the container and each node's available resources. It might also implement affinity, which colocates containers on the same node, and anti-affinity, which places containers on different nodes.
  • Service management: Implements the concept of named and versioned services that map directly to services in the microservice architecture. The orchestration framework ensures that the desired number of healthy instances is running at all times. It load balances requests across them. The orchestration framework performs rolling upgrades of services and lets you roll back to an old version.

Docker orchestration frameworks are an increasingly popular way to deploy applications. Docker Swarm is part of the Docker engine, so it's easy to set up and use. Kubernetes is much more complex to set up and administer, but it's also much more sophisticated. At the time of writing, Kubernetes has tremendous momentum, with a massive open source community. Let's take a closer look at how it works.

Kubernetes architecture

Kubernetes runs on a cluster of machines. Figure 12.10 shows the architecture of a Kubernetes cluster. Each machine in a Kubernetes cluster is either a master or a node. A typical cluster has a small number of masters—perhaps just one—and many nodes. A master machine is responsible for managing the cluster. A node is a worker that runs one or more pods. A pod is Kubernetes's unit of deployment and consists of a set of containers.

Figure 12.10. A Kubernetes cluster consists of a master, which manages the cluster, and nodes, which run the services. Developers and the deployment pipeline interact with Kubernetes through the API server, which, along with other cluster-management software, runs on the master. Application containers run on nodes. Each node runs a Kubelet, which manages the application containers, and a kube-proxy, which routes application requests to the pods, either directly as a proxy or indirectly by configuring iptables routing rules built into the Linux kernel.

A master runs several components, including the following:

  • API server: The REST API for deploying and managing services, used by the kubectl command-line interface, for example.
  • Etcd: A key-value NoSQL database that stores the cluster data.
  • Scheduler: Selects a node to run a pod.
  • Controller manager: Runs the controllers, which ensure that the state of the cluster matches the intended state. For example, one type of controller, known as a replication controller, ensures that the desired number of instances of a service are running by starting and terminating instances.

A node runs several components, including the following:

  • Kubelet: Creates and manages the pods running on the node
  • kube-proxy: Manages networking, including load balancing across pods
  • Pods: The application services

Let’s now look at key Kubernetes concepts you’ll need to master to deploy services on Kubernetes.

Key Kubernetes concepts

As mentioned in the introduction to this section, Kubernetes is quite complex. But it’s possible to use Kubernetes productively once you master a few key concepts, called objects. Kubernetes defines many types of objects. From a developer’s perspective, the most important objects are the following:

  • Pod: A pod is the basic unit of deployment in Kubernetes. It consists of one or more containers that share an IP address and storage volumes. The pod for a service instance often consists of a single container, such as a container running the JVM. But in some scenarios a pod contains one or more sidecar containers, which implement supporting functions. For example, an NGINX server could have a sidecar that periodically does a git pull to download the latest version of the website. A pod is ephemeral, because either the pod's containers or the node it's running on might crash.
  • Deployment: A declarative specification of a pod. A deployment is a controller that ensures that the desired number of instances of the pod (service instances) are running at all times. It supports versioning with rolling upgrades and rollbacks. Later in section 12.4.2, you'll see that each service in a microservice architecture is a Kubernetes deployment.
  • Service: Provides clients of an application service with a static/stable network location. It's a form of infrastructure-provided service discovery, described in chapter 3. A service has an IP address and a DNS name that resolves to that IP address and load balances TCP and UDP traffic across one or more pods. The IP address and DNS name are only accessible within the Kubernetes cluster. Later, I describe how to configure services that are accessible from outside the cluster.
  • ConfigMap: A named collection of name-value pairs that defines the externalized configuration for one or more application services (see chapter 11 for an overview of externalized configuration). The definition of a pod's container can reference a ConfigMap to define the container's environment variables. It can also use a ConfigMap to create configuration files inside the container. You can store sensitive information, such as passwords, in a form of ConfigMap called a Secret.

Now that we’ve reviewed the key Kubernetes concepts, let’s see them in action by looking at how to deploy an application service on Kubernetes.

12.4.2. Deploying the Restaurant service on Kubernetes

As mentioned earlier, to deploy a service on Kubernetes, you need to define a deployment. The easiest way to create a Kubernetes object such as a deployment is by writing a YAML file. Listing 12.4 is a YAML file defining a deployment for Restaurant Service. This deployment specifies running two replicas of a pod. The pod has just one container. The container definition specifies the Docker image running along with other attributes, such as the values of environment variables. The container’s environment variables are the service’s externalized configuration. They are read by Spring Boot and made available as properties in the application context.

Listing 12.4. The Kubernetes deployment for ftgo-restaurant-service
apiVersion: extensions/v1beta1
kind: Deployment                                               1
metadata:
  name: ftgo-restaurant-service                                2
spec:
  replicas: 2                                                  3
  template:
    metadata:
      labels:
        app: ftgo-restaurant-service                           4
    spec:                                                      5
      containers:
      - name: ftgo-restaurant-service
        image: msapatterns/ftgo-restaurant-service:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080                                  6
          name: httpport
        env:                                                   7
          - name: JAVA_OPTS
            value: "-Dsun.net.inetaddr.ttl=30"
          - name: SPRING_DATASOURCE_URL
            value: jdbc:mysql://ftgo-mysql/eventuate
          - name: SPRING_DATASOURCE_USERNAME
            valueFrom:
              secretKeyRef:
                name: ftgo-db-secret
                key: username
          - name: SPRING_DATASOURCE_PASSWORD
            valueFrom:
              secretKeyRef:
                name: ftgo-db-secret                           8
                key: password
          - name: SPRING_DATASOURCE_DRIVER_CLASS_NAME
            value: com.mysql.jdbc.Driver
          - name: EVENTUATELOCAL_KAFKA_BOOTSTRAP_SERVERS
            value: ftgo-kafka:9092
          - name: EVENTUATELOCAL_ZOOKEEPER_CONNECTION_STRING
            value: ftgo-zookeeper:2181
        livenessProbe:                                         9
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 20
        readinessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 20

  • 1 Specifies that this is an object of type Deployment
  • 2 The name of the deployment
  • 3 Number of pod replicas
  • 4 Gives each pod a label called app whose value is ftgo-restaurant-service
  • 5 The specification of the pod, which defines just one container
  • 6 The container's port
  • 7 The container's environment variables, which are read by Spring Boot
  • 8 Sensitive values that are retrieved from the Kubernetes Secret called ftgo-db-secret
  • 9 Configures Kubernetes to invoke the health check endpoint

This deployment definition configures Kubernetes to invoke Restaurant Service's health check endpoint. As described in chapter 11, a health check endpoint enables Kubernetes to determine the health of the service instance. Kubernetes implements two different checks. The first check is the readinessProbe, which it uses to determine whether it should route traffic to a service instance. In this example, Kubernetes invokes the /actuator/health HTTP endpoint every 20 seconds after an initial 60-second delay, which gives the service a chance to initialize. If a number (default, 1) of consecutive readinessProbe invocations succeed, Kubernetes considers the service instance to be ready, whereas if a number (default, 3) of consecutive invocations fail, it considers the instance not to be ready. Kubernetes will only route traffic to the service instance when the readinessProbe indicates that it's ready.

The second health check is the livenessProbe. It's configured the same way as the readinessProbe. But rather than determining whether traffic should be routed to a service instance, the livenessProbe determines whether Kubernetes should terminate and restart the service instance. If a number (default, 3) of consecutive livenessProbe invocations fail, Kubernetes terminates and restarts the service.

Once you’ve written the YAML file, you can create or update the deployment by using the kubectl apply command:

kubectl apply -f ftgo-restaurant-service/src/deployment/kubernetes/ftgo-restaurant-service.yml

This command makes a request to the Kubernetes API server that results in the creation of the deployment and the pods.

To create this deployment, you must first create the Kubernetes Secret called ftgo-db-secret. One quick and insecure way to do that is as follows:

kubectl create secret generic ftgo-db-secret \
  --from-literal=username=mysqluser --from-literal=password=mysqlpw

This command creates a secret containing the database user ID and password specified on the command line. See the Kubernetes documentation (https://kubernetes.io/docs/concepts/configuration/secret/#creating-your-own-secrets) for more secure ways to create secrets.
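
The same secret can also be defined declaratively. The following sketch is equivalent to the kubectl create secret command above; the data values must be base64-encoded (for example, with echo -n mysqluser | base64):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ftgo-db-secret
type: Opaque
data:
  username: bXlzcWx1c2Vy    # base64 encoding of "mysqluser"
  password: bXlzcWxwdw==    # base64 encoding of "mysqlpw"
```

Applying this file with kubectl apply -f creates the same secret, with the advantage that it can be kept (suitably protected) under version control alongside the other manifests.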

Creating a Kubernetes service

At this point the pods are running, and the Kubernetes deployment will do its best to keep them running. The problem is that the pods have dynamically assigned IP addresses and, as such, aren’t that useful to a client that wants to make an HTTP request. As described in chapter 3, the solution is to use a service discovery mechanism. One approach is to use a client-side discovery mechanism and install a service registry, such as Netflix OSS Eureka. Fortunately, we can avoid doing that by using the service discovery mechanism built in to Kubernetes and define a Kubernetes service.

A service is a Kubernetes object that provides the clients of one or more pods with a stable endpoint. It has an IP address and a DNS name that resolves to that IP address. The service load balances traffic to that IP address across the pods. Listing 12.5 shows the Kubernetes service for Restaurant Service. This service routes traffic from http://ftgo-restaurant-service:8080 to the pods defined by the deployment shown in listing 12.4.

Listing 12.5. The YAML definition of the Kubernetes service for ftgo-restaurant-service
apiVersion: v1
kind: Service
metadata:
  name: ftgo-restaurant-service          1
spec:
  ports:
  - port: 8080                           2
    targetPort: 8080                     3
  selector:
    app: ftgo-restaurant-service         4
---

  • 1 The name of the service, also the DNS name
  • 2 The exposed port
  • 3 The container port to route traffic to
  • 4 Selects the containers to route traffic to

The key part of the service definition is selector, which selects the target pods. It selects those pods that have a label named app with the value ftgo-restaurant-service. If you look closely, you’ll see that the container defined in listing 12.4 has such a label.

Once you’ve written the YAML file, you can create the service using this command:

kubectl apply -f ftgo-restaurant-service-service.yml

Now that we’ve created the Kubernetes service, any clients of Restaurant Service that are running inside the Kubernetes cluster can access its REST API via http://ftgo-restaurant-service:8080. Later, I discuss how to upgrade running services, but first let’s take a look at how to make the services accessible from outside the Kubernetes cluster.

12.4.3. Deploying the API gateway

The Kubernetes service for Restaurant Service, shown in listing 12.5, is only accessible from within the cluster. That's not a problem for Restaurant Service, but what about API Gateway? Its role is to route traffic from the outside world to the service, so it needs to be accessible from outside the cluster. Fortunately, a Kubernetes service supports this use case as well. The service we looked at earlier is a ClusterIP service, which is the default, but there are two other types of service: NodePort and LoadBalancer.

A NodePort service is accessible via a cluster-wide port on all the nodes in the cluster. Any traffic to that port on any cluster node is load balanced to the backend pods. You must select an available port in the range 30000–32767. For example, listing 12.6 shows a NodePort service that exposes API Gateway on port 30000.

Listing 12.6. The YAML definition of the NodePort service for ftgo-api-gateway
apiVersion: v1
kind: Service
metadata:
  name: ftgo-api-gateway
spec:
  type: NodePort              1
  ports:
  - nodePort: 30000           2
    port: 80
    targetPort: 8080
  selector:
    app: ftgo-api-gateway
---

  • 1 Specifies a type of NodePort
  • 2 The cluster-wide port

API Gateway is accessible within the cluster using the URL http://ftgo-api-gateway and outside the cluster using the URL http://<node-ip-address>:30000/, where node-ip-address is the IP address of one of the nodes. After configuring a NodePort service you can, for example, configure an AWS Elastic Load Balancer (ELB) to load balance requests from the internet across the nodes. A key benefit of this approach is that the ELB is entirely under your control. You have complete flexibility when configuring it.

A NodePort service isn't the only option, though. You can also use a LoadBalancer service, which automatically configures a cloud-specific load balancer. The load balancer will be an ELB if Kubernetes is running on AWS. One benefit of this type of service is that you no longer have to configure your own load balancer. The drawback, however, is that although Kubernetes does give you a few options for configuring the ELB, such as the SSL certificate, you have much less control over its configuration.
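
For comparison, a LoadBalancer version of the API gateway service might look like the following sketch. It keeps the same selector and ports as listing 12.6 but lets Kubernetes provision the cloud load balancer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ftgo-api-gateway
spec:
  type: LoadBalancer     # Kubernetes provisions a cloud load balancer (an ELB on AWS)
  ports:
  - port: 80             # port exposed by the load balancer
    targetPort: 8080     # container port to route traffic to
  selector:
    app: ftgo-api-gateway
```

Once the load balancer has been provisioned, its address appears in the service's status, which you can see with kubectl get service ftgo-api-gateway.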

12.4.4. Zero-downtime deployments

Imagine you’ve updated Restaurant Service and want to deploy those changes into production. Updating a running service is a simple three-step process when using Kubernetes:

  1. Build a new container image and push it to the registry using the same process described earlier. The only difference is that the image will be tagged with a different version tag, such as ftgo-restaurant-service:1.1.0.RELEASE.
  2. Edit the YAML file for the service's deployment so that it references the new image.
  3. Update the deployment using the kubectl apply -f command.

Kubernetes will then perform a rolling upgrade of the pods. It will incrementally create pods running version 1.1.0.RELEASE and terminate the pods running version 1.0.0.RELEASE. What’s great about how Kubernetes does this is that it doesn’t terminate old pods until their replacements are ready to handle requests. It uses the readinessProbe mechanism, a health check mechanism described earlier in this section, to determine whether a pod is ready. As a result, there will always be pods available to handle requests. Eventually, assuming the new pods start successfully, all the deployment’s pods will be running the new version.
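
If you need to control the pace of the rollout, a deployment's strategy section can bound how many pods are replaced at a time. The following sketch shows settings you might add to listing 12.4; the values shown are common defaults, not taken from the FTGO configuration:

```yaml
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod below the desired count during the upgrade
      maxSurge: 1         # at most one extra pod above the desired count
```

With these settings, Kubernetes starts at most one new pod at a time and waits for its readinessProbe to succeed before terminating an old pod.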

But what if there’s a problem and the version 1.1.0.RELEASE pods don’t start? Perhaps there’s a bug, such as a misspelled container image name or a missing environment variable for a new configuration property. If the pods fail to start, the deployment will become stuck. At that point, you have two options. One option is to fix the YAML file and rerun kubectl apply -f to update the deployment. The other option is to roll back the deployment.

A deployment maintains the history of what are termed rollouts. Each time you update the deployment, it creates a new rollout. As a result, you can easily roll back a deployment to a previous version by executing the following command:

kubectl rollout undo deployment ftgo-restaurant-service

Kubernetes will then replace the pods running version 1.1.0.RELEASE with pods running the older version, 1.0.0.RELEASE.

A Kubernetes deployment is a good way to deploy a service without downtime. But what if a bug only appears after the pod is ready and receiving production traffic? In that situation, Kubernetes will continue to roll out new versions, so a growing number of users will be impacted. Though your monitoring system will hopefully detect the issue and quickly roll back the deployment, you won’t avoid impacting at least some users. To address this issue and make rolling out a new version of a service more reliable, we need to separate deploying, which means getting the service running in production, from releasing the service, which means making it available to handle production traffic. Let’s look at how to accomplish that using a service mesh.

12.4.5. Using a service mesh to separate deployment from release

The traditional way to roll out a new version of a service is to first test it in a staging environment. Then, once it’s passed the test in staging, you deploy in production by doing a rolling upgrade that replaces old instances of the service with new service instances. On one hand, as you just saw, Kubernetes deployments make doing a rolling upgrade very straightforward. On the other hand, this approach assumes that once a service version has passed the tests in the staging environment, it will work in production. Sadly, this is not always the case.

One reason is that staging is unlikely to be an exact clone, if for no other reason than that the production environment is likely to be much larger and handle much more traffic. It's also time consuming to keep the two environments synchronized. As a result of these discrepancies, some bugs are likely to show up only in production. And even if staging were an exact clone, you couldn't guarantee that testing would catch all bugs.

A much more reliable way to roll out a new version is to separate deployment from release:

  • Deployment: Running the service in the production environment
  • Releasing the service: Making it available to end users

You then deploy a service into production using the following steps:

  1. Deploy the new version into production without routing any end-user requests to it.
  2. Test it in production.
  3. Release it to a small number of end users.
  4. Incrementally release it to an increasingly larger number of users until it's handling all the production traffic.
  5. If at any point there's an issue, revert to the old version. Otherwise, once you're confident the new version is working correctly, delete the old version.

Ideally, those steps will be performed by a fully automated deployment pipeline that carefully monitors the newly deployed service for errors.

Traditionally, separating deployments and releases in this way has been challenging because it requires a lot of work to implement. But one of the benefits of using a service mesh is that this style of deployment becomes a lot easier. A service mesh is, as described in chapter 11, networking infrastructure that mediates all communication between a service and other services and external applications. In addition to taking on some of the responsibilities of the microservice chassis framework, a service mesh provides rule-based load balancing and traffic routing that lets you safely run multiple versions of your services simultaneously. Later in this section, you'll see how you can, for example, route test users to one version of a service and all other users to a different version.
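
As an illustrative sketch of such a rule, the following Istio VirtualService routes requests carrying a hypothetical x-test-user header to version v2 of Restaurant Service, while everyone else stays on v1. The subset names assume a corresponding DestinationRule that maps them to pod labels:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ftgo-restaurant-service
spec:
  hosts:
  - ftgo-restaurant-service
  http:
  - match:
    - headers:
        x-test-user:          # hypothetical header identifying test users
          exact: "true"
    route:
    - destination:
        host: ftgo-restaurant-service
        subset: v2            # new version, for test users only
  - route:
    - destination:
        host: ftgo-restaurant-service
        subset: v1            # current version, for all other traffic
```

Shifting the release forward is then a matter of editing the routing rule, for example replacing the header match with percentage-based weights, without redeploying any pods.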

As described in chapter 11, there are several service meshes to choose from. In this section, I show you how to use Istio, a popular, open source service mesh originally developed by Google, IBM, and Lyft. I begin by providing a brief overview of Istio and a few of its many features. Next I describe how to deploy an application using Istio. After that, I show how to use its traffic-routing capabilities to deploy and release an upgrade to a service.

Overview of the Istio service mesh

The Istio website describes Istio as "an open platform to connect, manage, and secure microservices" (https://istio.io). It's a networking layer through which all of your services' network traffic flows. Istio has a rich set of features organized into four main categories:

  • Traffic management: Includes service discovery, load balancing, routing rules, and circuit breakers
  • Security: Secures interservice communication using Transport Layer Security (TLS)
  • Telemetry: Captures metrics about network traffic and implements distributed tracing
  • Policy enforcement: Enforces quotas and rate limits

This section focuses on Istio’s traffic-management capabilities.

Figure 12.11 shows Istio’s architecture. It consists of a control plane and a data plane. The control plane implements management functions, including configuring the data plane to route traffic. The data plane consists of Envoy proxies, one per service instance.

Figure 12.11. Istio consists of a control plane, whose components include the Pilot and the Mixer, and a data plane, which consists of Envoy proxies. The Pilot extracts information about deployed services from the underlying infrastructure and configures the data plane. The Mixer enforces policies, such as quotas, and collects telemetry, which it reports to the monitoring infrastructure servers. The Envoy proxies route traffic to the services. There is one Envoy proxy per service instance.

The two main components of the control plane are the Pilot and the Mixer. The Pilot extracts information about deployed services from the underlying infrastructure. When running on Kubernetes, for example, the Pilot retrieves the services and healthy pods. It configures the Envoy proxies to route traffic according to the defined routing rules. The Mixer collects telemetry from the Envoy proxies and enforces policies.

The Istio Envoy proxy is a modified version of Envoy (www.envoyproxy.io). It’s a high-performance proxy that supports a variety of protocols, including low-level protocols such as TCP and higher-level protocols such as HTTP and HTTPS. It also understands the MongoDB, Redis, and DynamoDB protocols. Envoy also supports robust interservice communication with features such as circuit breakers, rate limiting, and automatic retries. It can secure communication within the application by using TLS for inter-Envoy communication.
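Features such as circuit breaking are configured declaratively rather than in code. The following is a sketch of how that might look using Istio's v1alpha3 traffic-policy API; the field names follow that API, but the specific thresholds are illustrative assumptions, not recommended values:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ftgo-consumer-service
spec:
  host: ftgo-consumer-service
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100   # cap on queued requests per Envoy
    outlierDetection:                  # circuit-breaker-style behavior
      consecutiveErrors: 5             # eject a pod after 5 consecutive errors
      interval: 30s                    # how often hosts are scanned
      baseEjectionTime: 60s            # minimum time an ejected pod stays out
```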

Istio uses Envoy as a sidecar, a process or container that runs alongside the service instance and implements cross-cutting concerns. When running on Kubernetes, the Envoy proxy is a container within the service’s pod. In other environments that don’t have the pod concept, Envoy runs in the same container as the service. All traffic to and from a service flows through its Envoy proxy, which routes traffic according to the routing rules given to it by the control plane. For example, direct Service → Service communication becomes Service → Source Envoy → Destination Envoy → Service.

Pattern: Sidecar

Implement cross-cutting concerns in a sidecar process or container that runs alongside the service instance. See http://microservices.io/patterns/deployment/sidecar.html.

Istio is configured using Kubernetes-style YAML configuration files. It has a command-line tool called istioctl that’s similar to kubectl. You use istioctl for creating, updating, and deleting rules and policies. When using Istio on Kubernetes, you can also use kubectl.

Let’s look at how to deploy a service with Istio.

Deploying a service with Istio

Deploying a service on Istio is quite straightforward. You define a Kubernetes Service and a Deployment for each of your application’s services. Listing 12.7 shows the definition of Service and Deployment for Consumer Service. Although it’s almost identical to the definitions I showed earlier, there are a few differences. That’s because Istio has a few requirements for the Kubernetes services and pods:

  • A Kubernetes service port must use the Istio naming convention of <protocol>[-<suffix>], where protocol is http, http2, grpc, mongo, or redis. If the port is unnamed, Istio will treat the port as a TCP port and won’t apply rule-based routing.
  • A pod should have an app label such as app: ftgo-consumer-service, which identifies the service, in order to support Istio distributed tracing.
  • In order to run multiple versions of a service simultaneously, the name of a Kubernetes deployment must include the version, such as ftgo-consumer-service-v1, ftgo-consumer-service-v2, and so on. A deployment’s pods should have a version label, such as version: v1, which specifies the version, so that Istio can route to a specific version.
Listing 12.7. Deploying Consumer Service with Istio
apiVersion: v1
kind: Service
metadata:
  name: ftgo-consumer-service
spec:
  ports:
  - name: http                                    1
    port: 8080
    targetPort: 8080
  selector:
    app: ftgo-consumer-service
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ftgo-consumer-service-v2                   2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ftgo-consumer-service                 3
        version: v2
    spec:
      containers:
      - image: ftgo-consumer-service:v2            4
 ...

  • 1 Named port
  • 2 Versioned deployment
  • 3 Recommended labels
  • 4 Image version

By now, you may be wondering how to run the Envoy proxy container in the service’s pod. Fortunately, Istio makes that remarkably easy by automatically modifying the pod definition to include the Envoy proxy. There are two ways to do that. The first is to use manual sidecar injection and run the istioctl kube-inject command:

istioctl kube-inject -f ftgo-consumer-service/src/deployment/kubernetes/ftgo-
     consumer-service.yml | kubectl apply -f -

This command reads a Kubernetes YAML file and outputs the modified configuration containing the Envoy proxy. The modified configuration is then piped into kubectl apply.

The second way to add the Envoy sidecar to the pod is to use automatic sidecar injection. When this feature is enabled, you deploy a service using kubectl apply. Kubernetes automatically invokes Istio to modify the pod definition to include the Envoy proxy.
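On Kubernetes, automatic injection is typically enabled per namespace. A minimal sketch, assuming the Istio sidecar injector webhook is installed in the cluster, is to label the namespace that Istio should watch:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled   # tells Istio's webhook to inject the Envoy sidecar
```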

If you describe your service’s pod, you’ll see that it consists of more than your service’s container:

$ kubectl describe po ftgo-consumer-service-7db65b6f97-q9jpr

Name:           ftgo-consumer-service-7db65b6f97-q9jpr
Namespace:      default
  ...
Init Containers:
  istio-init:                                                   1
     Image:         docker.io/istio/proxy_init:0.8.0
    ....
Containers:
  ftgo-consumer-service:                                        2
     Image:          msapatterns/ftgo-consumer-service:latest
    ...
  istio-proxy:
    Image:         docker.io/istio/proxyv2:0.8.0                3
 ...

  • 1 Initializes the pod
  • 2 The service container
  • 3 The Envoy container

Now that we’ve deployed the service, let’s look at how to define routing rules.

Creating routing rules to route to the v1 version

Let’s imagine that you deployed the ftgo-consumer-service-v2 deployment. In the absence of routing rules, Istio load balances requests across all versions of a service. It would, therefore, load balance across versions 1 and 2 of ftgo-consumer-service, which defeats the purpose of using Istio. In order to safely roll out a new version, you must define a routing rule that routes all traffic to the current v1 version.

Figure 12.12 shows the routing rule for Consumer Service that routes all traffic to v1. It consists of two Istio objects: a VirtualService and a DestinationRule.

Figure 12.12. The routing rule for Consumer Service, which routes all traffic to the v1 pods. It consists of a VirtualService, which routes its traffic to the v1 subset, and a DestinationRule, which defines the v1 subset as the pods labeled version: v1. Once this rule is defined, you can safely deploy a new version without initially routing any traffic to it.

A VirtualService defines how to route requests for one or more hostnames. In this example, VirtualService defines the routes for a single hostname: ftgo-consumer-service. Here’s the definition of VirtualService for Consumer Service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ftgo-consumer-service
spec:
  hosts:
  - ftgo-consumer-service                 1
  http:
    - route:
      - destination:
          host: ftgo-consumer-service     2
          subset: v1                      3

  • 1 Applies to the Consumer Service
  • 2 Routes to Consumer Service
  • 3 The v1 subset

It routes all requests to the v1 subset of the pods of Consumer Service. Later, I show more complex examples that route based on HTTP requests and load balance across multiple weighted destinations.

In addition to VirtualService, you must also define a DestinationRule, which defines one or more subsets of pods for a service. A subset of pods is typically a service version. A DestinationRule can also define traffic policies, such as the load-balancing algorithm. Here’s the DestinationRule for Consumer Service:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ftgo-consumer-service
spec:
  host: ftgo-consumer-service
  subsets:
  - name: v1                 1
    labels:
      version: v1            2
  - name: v2
    labels:
      version: v2

  • 1 The name of the subset
  • 2 The pod selector for the subset

This DestinationRule defines two subsets of pods: v1 and v2. The v1 subset selects pods with the label version: v1. The v2 subset selects pods with the label version: v2.

Once you’ve defined these rules, Istio will only route traffic to pods labeled version: v1. It’s now safe to deploy v2.

Deploying version 2 of Consumer Service

Here’s an excerpt of the version 2 Deployment for Consumer Service:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ftgo-consumer-service-v2      1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ftgo-consumer-service
        version: v2                   2
 ...

  • 1 Version 2
  • 2 Pod is labeled with the version

This deployment is called ftgo-consumer-service-v2. It labels its pods with version: v2. After creating this deployment, both versions of the ftgo-consumer-service will be running. But because of the routing rules, Istio won’t route any traffic to v2. You’re now ready to route some test traffic to v2.

Routing test traffic to version 2

Once you’ve deployed a new version of a service, the next step is to test it. Let’s suppose that requests from test users have a testuser header. We can enhance the ftgo-consumer-service VirtualService to route requests with this header to v2 instances by making the following change:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ftgo-consumer-service
spec:
  hosts:
  - ftgo-consumer-service
  http:
    - match:
      - headers:
          testuser:
            regex: "^.+$"                1
      route:
      - destination:
          host: ftgo-consumer-service
          subset: v2                     2
    - route:
      - destination:
          host: ftgo-consumer-service
          subset: v1                     3

  • 1 Matches a nonblank testuser header
  • 2 Routes test users to v2
  • 3 Routes everyone else to v1

In addition to the original default route, VirtualService has a routing rule that routes requests with the testuser header to the v2 subset. After you’ve updated the rules, you can now test Consumer Service. Then, once you feel confident that v2 is working, you can route some production traffic to it. Let’s look at how to do that.

Routing production traffic to version 2

After you’ve tested a newly deployed service, the next step is to start routing production traffic to it. A good strategy is to initially only route a small amount of traffic. Here, for example, is a rule that routes 95% of traffic to v1 and 5% to v2:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ftgo-consumer-service
spec:
  hosts:
  - ftgo-consumer-service
  http:
    - route:
      - destination:
          host: ftgo-consumer-service
          subset: v1
        weight: 95
      - destination:
          host: ftgo-consumer-service
          subset: v2
        weight: 5

As you gain confidence that the service can handle production traffic, you can incrementally increase the amount of traffic going to the version 2 pods until it reaches 100%. At that point, Istio isn’t routing any traffic to the v1 pods. You could leave them running for a little while longer before deleting the version 1 Deployment.
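For example, the end state of the rollout can be expressed with the same kind of rule, with all of the weight shifted to v2:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ftgo-consumer-service
spec:
  hosts:
  - ftgo-consumer-service
  http:
    - route:
      - destination:
          host: ftgo-consumer-service
          subset: v2
        weight: 100   # all production traffic now goes to v2
```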

By letting you easily separate deployment from release, Istio makes rolling out a new version of a service much more reliable. Yet I’ve barely scratched the surface of Istio’s capabilities. As of the time of writing, the current version of Istio is 0.8. I’m excited to watch it and the other service meshes mature and become a standard part of a production environment.

12.5. Deploying services using the Serverless deployment pattern

The Language-specific packaging (section 12.1), Service as a VM (section 12.2), and Service as a container (section 12.3) patterns are all quite different, but they share some common characteristics. The first is that with all three patterns you must preprovision some computing resources—either physical machines, virtual machines, or containers. Some deployment platforms implement autoscaling, which dynamically adjusts the number of VMs or containers based on the load. But you’ll always need to pay for some VMs or containers, even if they’re idle.

Another common characteristic is that you’re responsible for system administration. If you’re running any kind of machine, you must patch the operating system. In the case of physical machines, this also includes racking and stacking. You’re also responsible for administering the language runtime. This is an example of what Amazon called “undifferentiated heavy lifting.” Since the early days of computing, system administration has been one of those things you need to do. As it turns out, though, there’s a solution: serverless.

12.5.1. Overview of serverless deployment with AWS Lambda

At AWS Re:Invent 2014, Werner Vogels, the CTO of Amazon, introduced AWS Lambda with the amazing phrase “magic happens at the intersection of functions, events, and data.” As this phrase suggests, AWS Lambda was initially for deploying event-driven services. It’s “magic” because, as you’ll see, AWS Lambda is an example of serverless deployment technology.

Serverless deployment technologies

The main public clouds all provide a serverless deployment option, although AWS Lambda is the most advanced. Google Cloud has Google Cloud Functions, which as of the time of writing is in beta (https://cloud.google.com/functions/). Microsoft Azure has Azure Functions (https://azure.microsoft.com/en-us/services/functions).

There are also open source serverless frameworks, such as Apache OpenWhisk (https://openwhisk.apache.org) and Fission for Kubernetes (https://fission.io), that you can run on your own infrastructure. But I’m not entirely convinced of their value. You need to manage the infrastructure that runs the serverless framework—which doesn’t exactly sound like serverless. Moreover, as you’ll see later in this section, serverless provides a constrained programming model in exchange for minimal system administration. If you need to manage infrastructure, then you have the constraints without the benefit.

AWS Lambda supports Java, Node.js, C#, Go, and Python. A lambda function is a stateless service. It typically handles requests by invoking AWS services. For example, a lambda function that’s invoked when an image is uploaded to an S3 bucket could insert an item into a DynamoDB IMAGES table and publish a message to Kinesis to trigger image processing. A lambda function can also invoke third-party web services.

To deploy a service, you package your application as a ZIP file or JAR file, upload it to AWS Lambda, and specify the name of the function to invoke to handle a request (also called an event). AWS Lambda automatically runs enough instances of your microservice to handle incoming requests. You’re billed for each request based on the time taken and the memory consumed. Of course, the devil is in the details, and later you’ll see that AWS Lambda has limitations. But the notion that neither you as a developer nor anyone in your organization needs to worry about any aspect of servers, virtual machines, or containers is incredibly powerful.

Pattern: Serverless deployment

Deploy services using a serverless deployment mechanism provided by a public cloud. See http://microservices.io/patterns/deployment/serverless-deployment.html.

12.5.2. Developing a lambda function

Unlike when using the other three patterns, you must use a different programming model for your lambda functions. A lambda function’s code and packaging depend on the programming language. A Java lambda function is a class that implements the generic interface RequestHandler, which is defined by the AWS Lambda Java core library and shown in the following listing. This interface takes two type parameters: I, which is the input type, and O, which is the output type. The types of I and O depend on the specific kind of request that the lambda handles.

Listing 12.8. A Java lambda function is a class that implements the RequestHandler interface.
public interface RequestHandler<I, O> {
    public O handleRequest(I input, Context context);
}

The RequestHandler interface defines a single handleRequest() method. This method has two parameters, an input object and a context, which provide access to the lambda execution environment, such as the request ID. The handleRequest() method returns an output object. For lambda functions that handle HTTP requests that are proxied by an AWS API Gateway, I and O are APIGatewayProxyRequestEvent and APIGatewayProxyResponseEvent, respectively. As you’ll soon see, the handler functions are quite similar to old-style Java EE servlets.
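To make the shape of a handler concrete, here’s a self-contained sketch. The RequestHandler and Context interfaces below are simplified stand-ins for the AWS Lambda Java core library types, and the string-map input is a hypothetical stand-in for a real event class such as APIGatewayProxyRequestEvent:

```java
import java.util.Map;

public class HandlerSketch {

    // Simplified stand-in for the AWS Lambda RequestHandler interface
    interface RequestHandler<I, O> {
        O handleRequest(I input, Context context);
    }

    // Simplified stand-in for the Lambda Context; the real one also exposes
    // the function name, memory limit, remaining time, and a logger
    interface Context {
        String getAwsRequestId();
    }

    // A handler that builds a greeting from the request; the input map is a
    // hypothetical stand-in for an API Gateway request event
    static class GreetingHandler implements RequestHandler<Map<String, String>, String> {
        @Override
        public String handleRequest(Map<String, String> input, Context context) {
            String name = input.getOrDefault("name", "world");
            return "Hello, " + name + " (request " + context.getAwsRequestId() + ")";
        }
    }

    public static void main(String[] args) {
        GreetingHandler handler = new GreetingHandler();
        // Context is a single-method interface, so a lambda expression works here
        String response = handler.handleRequest(Map.of("name", "FTGO"), () -> "req-1");
        System.out.println(response);
    }
}
```

In production, AWS Lambda itself plays the role of `main()`: it deserializes the incoming event into the input type and calls `handleRequest()` for you.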

A Java lambda is packaged as either a ZIP file or a JAR file. A JAR file is an uber JAR (or fat JAR) created by, for example, the Maven Shade plugin. A ZIP file has the classes in the root directory and JAR dependencies in the lib directory. Later, I show how a Gradle project can create a ZIP file. But first, let’s look at the different ways of invoking a lambda function.

12.5.3. Invoking lambda functions

There are four ways to invoke a lambda function:

  • HTTP requests
  • Events generated by AWS services
  • Scheduled invocations
  • Directly using an API call

Let’s look at each one.

Handling HTTP requests

One way to invoke a lambda function is to configure an AWS API Gateway to route HTTP requests to your lambda. The API gateway exposes your lambda function as an HTTPS endpoint. It functions as an HTTP proxy, invokes the lambda function with an HTTP request object, and expects the lambda function to return an HTTP response object. By using the API gateway with AWS Lambda you can, for example, deploy RESTful services as lambda functions.

Handling events generated by AWS services

The second way to invoke a lambda function is to configure your lambda function to handle events generated by an AWS service. Examples of events that can trigger a lambda function include the following:

  • An object is created in an S3 bucket.
  • An item is created, updated, or deleted in a DynamoDB table.
  • A message is available to read from a Kinesis stream.
  • An email is received via the Simple Email Service.

Because of this integration with other AWS services, AWS Lambda is useful for a wide range of tasks.

Defining scheduled lambda functions

Another way to invoke a lambda function is to use a Linux cron-like schedule. You can configure your lambda function to be invoked periodically—for example, every minute, 3 hours, or 7 days. Alternatively, you can use a cron expression to specify when AWS should invoke your lambda. cron expressions give you tremendous flexibility. For example, you can configure a lambda to be invoked at 2:15 p.m. Monday through Friday.
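As an illustration, AWS schedule expressions take two forms (the syntax is AWS’s; these particular schedules are just examples): a simple rate, or a six-field cron expression of the form cron(minutes hours day-of-month month day-of-week year):

```
rate(5 minutes)             # invoke every 5 minutes
cron(15 14 ? * MON-FRI *)   # invoke at 2:15 p.m. UTC, Monday through Friday
```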

Invoking a lambda function using a web service request

The fourth way to invoke a lambda function is for your application to invoke it using a web service request. The web service request specifies the name of the lambda function and the input event data. Your application can invoke a lambda function synchronously or asynchronously. If your application invokes the lambda function synchronously, the web service’s HTTP response contains the response of the lambda function. Otherwise, if it invokes the lambda function asynchronously, the web service response indicates whether the execution of the lambda was successfully initiated.

12.5.4. Benefits of using lambda functions

Deploying services using lambda functions has several benefits:

  • Integrated with many AWS services: It’s remarkably straightforward to write lambdas that consume events generated by AWS services, such as DynamoDB and Kinesis, and handle HTTP requests via the AWS API Gateway.
  • Eliminates many system administration tasks: You’re no longer responsible for low-level system administration. There are no operating systems or runtimes to patch. As a result, you can focus on developing your application.
  • Elasticity: AWS Lambda runs as many instances of your application as are needed to handle the load. You don’t have the challenge of predicting needed capacity or run the risk of underprovisioning or overprovisioning VMs or containers.
  • Usage-based pricing: Unlike a typical IaaS cloud, which charges by the minute or hour for a VM or container even when it’s idle, AWS Lambda only charges you for the resources that are consumed while processing each request.

12.5.5. Drawbacks of using lambda functions

As you can see, AWS Lambda is an extremely convenient way to deploy services, but there are some significant drawbacks and limitations:

  • Long-tail latency: Because AWS Lambda dynamically runs your code, some requests have high latency because of the time it takes for AWS to provision an instance of your application and for the application to start. This is particularly challenging when running Java-based services, because they typically take at least several seconds to start. For instance, the example lambda function described in the next section takes a while to start up. Consequently, AWS Lambda may not be suited for latency-sensitive services.
  • Limited event/request-based programming model: AWS Lambda isn’t intended to be used to deploy long-running services, such as a service that consumes messages from a third-party message broker.


Because of these drawbacks and limitations, AWS Lambda isn’t a good fit for all services. But when choosing a deployment pattern, I recommend first evaluating whether serverless deployment supports your service’s requirements before considering alternatives.


12.6. Deploying a RESTful service using AWS Lambda and AWS Gateway


Let’s take a look at how to deploy Restaurant Service using AWS Lambda. It’s a service that has a REST API for creating and managing restaurants. It doesn’t have long-lived connections to Apache Kafka, for example, so it’s a good fit for AWS Lambda. Figure 12.13 shows the deployment architecture for this service. The service consists of several lambda functions, one for each REST endpoint. An AWS API Gateway is responsible for routing HTTP requests to the lambda functions.

Figure 12.13. Restaurant Service deployed as AWS Lambda functions. An AWS API Gateway routes HTTP requests to the AWS Lambda functions, which are implemented by request handler classes defined by Restaurant Service.


Each lambda function has a request handler class. The ftgo-create-restaurant lambda function invokes the CreateRestaurantRequestHandler class, and the ftgo-find-restaurant lambda function invokes FindRestaurantRequestHandler. Because these request handler classes implement closely related aspects of the same service, they’re packaged together in the same ZIP file, restaurant-service-aws-lambda.zip. Let’s look at the design of the service, including those handler classes.


12.6.1. The design of the AWS Lambda version of Restaurant Service


The architecture of the service, shown in figure 12.14, is quite similar to that of a traditional service. The main difference is that Spring MVC controllers have been replaced by AWS Lambda request handler classes. The rest of the business logic is unchanged.

Figure 12.14. The design of the AWS Lambda-based Restaurant Service. The presentation tier consists of the request handler classes, which implement the lambda functions. They invoke the business tier, which is written in a traditional style and consists of a service class, an entity, and a repository.


The service consists of a presentation tier consisting of the request handlers, which are invoked by AWS Lambda to handle the HTTP requests, and a traditional business tier. The business tier consists of RestaurantService, the Restaurant JPA entity, and RestaurantRepository, which encapsulates the database.


Let’s take a look at the FindRestaurantRequestHandler class.

The FindRestaurantRequestHandler class


The FindRestaurantRequestHandler class implements the GET /restaurant/{restaurantId} endpoint. This class, along with the other request handler classes, forms the leaves of the class hierarchy shown in figure 12.15. The root of the hierarchy is RequestHandler, which is part of the AWS SDK. Its abstract subclasses handle errors and inject dependencies.

Figure 12.15. The design of the request handler classes. The abstract superclasses implement dependency injection and error handling.


The AbstractHttpHandler class is the abstract base class for HTTP request handlers. It catches unhandled exceptions thrown during request handling and returns a 500 - internal server error response. The AbstractAutowiringHttpRequestHandler class implements dependency injection for request handlers. I’ll describe these abstract superclasses shortly, but first let’s look at the code for FindRestaurantRequestHandler.


Listing 12.9 shows the code for the FindRestaurantRequestHandler class. The FindRestaurantRequestHandler class has a handleHttpRequest() method, which takes an APIGatewayProxyRequestEvent representing an HTTP request as a parameter. It invokes RestaurantService to find the restaurant and returns an APIGatewayProxyResponseEvent describing the HTTP response.

Listing 12.9. The handler class for GET /restaurant/{restaurantId}
public class FindRestaurantRequestHandler
     extends AbstractAutowiringHttpRequestHandler {

  @Autowired
  private RestaurantService restaurantService;

  @Override
  protected Class<?> getApplicationContextClass() {
    return CreateRestaurantRequestHandler.class;                         1
  }

  @Override
  protected APIGatewayProxyResponseEvent
       handleHttpRequest(APIGatewayProxyRequestEvent request, Context context) {
    long restaurantId;
    try {
      restaurantId = Long.parseLong(request.getPathParameters()
               .get("restaurantId"));
    } catch (NumberFormatException e) {
      return makeBadRequestResponse(context);                            2
    }

    Optional<Restaurant> possibleRestaurant =
        restaurantService.findById(restaurantId);

    return possibleRestaurant                                            3
        .map(this::makeGetRestaurantResponse)
        .orElseGet(() -> makeRestaurantNotFoundResponse(context,
                             restaurantId));
  }

  private APIGatewayProxyResponseEvent makeBadRequestResponse(Context context) {
    ...
  }

  private APIGatewayProxyResponseEvent
      makeRestaurantNotFoundResponse(Context context, long restaurantId) { ... }

  private  APIGatewayProxyResponseEvent
                        makeGetRestaurantResponse(Restaurant restaurant) { ... }
}

  • 1 The Spring Java configuration class to use for the application context
  • 2 Returns a 400 - bad request response if the restaurantId is missing or invalid
  • 3 Returns either the restaurant or a 404 - not found response


As you can see, it’s quite similar to a servlet, except that instead of a service() method, which takes an HttpServletRequest and returns an HttpServletResponse, it has a handleHttpRequest() method, which takes an APIGatewayProxyRequestEvent and returns an APIGatewayProxyResponseEvent.
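The following sketch makes this programming model concrete without the AWS SDK: RequestEvent and ResponseEvent are simplified stand-ins for APIGatewayProxyRequestEvent and APIGatewayProxyResponseEvent, and a toy lookup replaces RestaurantService. It mirrors the control flow of FindRestaurantRequestHandler: 400 for an unparseable path parameter, 404 for an unknown restaurant, and 200 otherwise.

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical stand-ins for APIGatewayProxyRequestEvent/ResponseEvent.
class RequestEvent {
    private final Map<String, String> pathParameters;
    RequestEvent(Map<String, String> pathParameters) { this.pathParameters = pathParameters; }
    Map<String, String> getPathParameters() { return pathParameters; }
}

class ResponseEvent {
    final int statusCode;
    final String body;
    ResponseEvent(int statusCode, String body) { this.statusCode = statusCode; this.body = body; }
}

public class FindRestaurantSketch {

    // Toy lookup standing in for RestaurantService.findById().
    static Optional<String> findById(long id) {
        return id == 1 ? Optional.of("Ajanta") : Optional.empty();
    }

    static ResponseEvent handleHttpRequest(RequestEvent request) {
        long restaurantId;
        try {
            restaurantId = Long.parseLong(request.getPathParameters().get("restaurantId"));
        } catch (NumberFormatException e) {
            return new ResponseEvent(400, "bad request");      // missing or invalid path parameter
        }
        return findById(restaurantId)
                .map(name -> new ResponseEvent(200, name))     // found: 200 plus the restaurant
                .orElseGet(() -> new ResponseEvent(404, "not found"));
    }

    public static void main(String[] args) {
        System.out.println(handleHttpRequest(new RequestEvent(Map.of("restaurantId", "1"))).statusCode);  // 200
        System.out.println(handleHttpRequest(new RequestEvent(Map.of("restaurantId", "x"))).statusCode);  // 400
        System.out.println(handleHttpRequest(new RequestEvent(Map.of("restaurantId", "2"))).statusCode);  // 404
    }
}
```

The real handler differs only in its types and in delegating to the Spring-managed RestaurantService.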


Let’s now take a look at its superclass, which implements dependency injection.

Dependency injection using the AbstractAutowiringHttpRequestHandler class


An AWS Lambda function is neither a web application nor an application with a main() method. But it would be a shame not to be able to use the Spring Boot features we’ve become accustomed to. The AbstractAutowiringHttpRequestHandler class, shown in the following listing, implements dependency injection for request handlers. It creates an ApplicationContext using SpringApplication.run() and autowires dependencies prior to handling the first request. Subclasses such as FindRestaurantRequestHandler must implement the getApplicationContextClass() method.

Listing 12.10. An abstract RequestHandler that implements dependency injection
public abstract class AbstractAutowiringHttpRequestHandler
     extends AbstractHttpHandler {

  private static ConfigurableApplicationContext ctx;
  private ReentrantReadWriteLock ctxLock = new ReentrantReadWriteLock();
  private boolean autowired = false;

  protected synchronized ApplicationContext getAppCtx() {               1
    ctxLock.writeLock().lock();
    try {
      if (ctx == null) {
        ctx = SpringApplication.run(getApplicationContextClass());
      }
      return ctx;
    } finally {
      ctxLock.writeLock().unlock();
    }
  }

  @Override
  protected void
        beforeHandling(APIGatewayProxyRequestEvent request, Context context) {
    super.beforeHandling(request, context);
    if (!autowired) {
      getAppCtx().getAutowireCapableBeanFactory().autowireBean(this);   2
      autowired = true;
    }
  }

  protected abstract Class<?> getApplicationContextClass();             3
}

  • 1 Creates the Spring Boot application context just once
  • 2 Injects dependencies into the request handler using autowiring before handling the first request
  • 3 Returns the @Configuration class used to create ApplicationContext


This class overrides the beforeHandling() method defined by AbstractHttpHandler. Its beforeHandling() method injects dependencies using autowiring before handling the first request.
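This create-once behavior pays off because AWS Lambda keeps a function instance, including its JVM, alive between invocations: the expensive SpringApplication.run() call happens only on the first, cold request, and warm requests reuse the cached context. Stripped of the Spring and AWS types, the pattern is a lazily initialized, lock-guarded singleton. The sketch below uses a plain Object and a counter as stand-ins for the application context:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the lazy, create-once initialization used by getAppCtx().
// A plain Object stands in for the Spring ApplicationContext;
// initCount tracks how many times the "context" is actually built.
public class LazyContextSketch {
    private static Object ctx;            // shared across invocations of a warm instance
    static int initCount = 0;
    private final ReentrantReadWriteLock ctxLock = new ReentrantReadWriteLock();

    Object getAppCtx() {
        ctxLock.writeLock().lock();
        try {
            if (ctx == null) {            // expensive creation happens only once
                initCount++;
                ctx = new Object();       // stands in for SpringApplication.run(...)
            }
            return ctx;
        } finally {
            ctxLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        LazyContextSketch handler = new LazyContextSketch();
        Object first = handler.getAppCtx();   // cold start: context is created
        Object second = handler.getAppCtx();  // warm invocation: cached context reused
        System.out.println(first == second);  // true
        System.out.println(initCount);        // 1
    }
}
```

The cost of the first request is unavoidable, which is one source of the long-tail latency discussed in section 12.5.5.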

The AbstractHttpHandler class


The request handlers for Restaurant Service ultimately extend AbstractHttpHandler, shown in listing 12.11. This class implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent>. Its key responsibility is to catch exceptions thrown while handling a request and return a 500 - internal server error response.

Listing 12.11. An abstract RequestHandler that catches exceptions and returns a 500 HTTP response
public abstract class AbstractHttpHandler implements
  RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

  private Logger log = LoggerFactory.getLogger(this.getClass());

  @Override
  public APIGatewayProxyResponseEvent handleRequest(
     APIGatewayProxyRequestEvent input, Context context) {
    log.debug("Got request: {}", input);
    try {
      beforeHandling(input, context);
      return handleHttpRequest(input, context);
    } catch (Exception e) {
      log.error("Error handling request id: {}", context.getAwsRequestId(), e);
      return buildErrorResponse(new AwsLambdaError(
              "Internal Server Error",
              "500",
              context.getAwsRequestId(),
              "Error handling request: " + context.getAwsRequestId() + " "
                  + input.toString()));
    }
  }

  protected void beforeHandling(APIGatewayProxyRequestEvent request,
     Context context) {
    // do nothing
  }

  protected abstract APIGatewayProxyResponseEvent handleHttpRequest(
     APIGatewayProxyRequestEvent request, Context context);
}


12.6.2. Packaging the service as a ZIP file


Before the service can be deployed, we must package it as a ZIP file. We can easily build the ZIP file using the following Gradle task:

task buildZip(type: Zip) {
    from compileJava
    from processResources
    into('lib') {
        from configurations.runtime
    }
}


This task builds a ZIP with the classes and resources at the top level and the JAR dependencies in the lib directory.
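This layout, classes and resources at the root with dependency JARs under lib/, is what the AWS Lambda Java runtime expects of a ZIP deployment package. The following sketch sanity-checks such a layout against a small in-memory ZIP; the entry names are purely illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipLayoutCheck {

    // Builds a tiny ZIP with the layout the buildZip task produces:
    // classes and resources at the top level, dependency JARs under lib/.
    static byte[] buildSampleZip() {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(out)) {
            for (String name : new String[] {
                    "net/chrisrichardson/ftgo/restaurantservice/lambda/FindRestaurantRequestHandler.class",
                    "application.properties",
                    "lib/spring-boot.jar" }) {     // illustrative entry names
                zip.putNextEntry(new ZipEntry(name));
                zip.closeEntry();
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return out.toByteArray();
    }

    // Lists the entry names, much like 'unzip -l' would.
    static List<String> listEntries(byte[] zipBytes) {
        List<String> names = new ArrayList<>();
        try (ZipInputStream zip = new ZipInputStream(new ByteArrayInputStream(zipBytes))) {
            for (ZipEntry entry; (entry = zip.getNextEntry()) != null; ) {
                names.add(entry.getName());
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return names;
    }

    public static void main(String[] args) {
        listEntries(buildSampleZip()).forEach(System.out::println);
    }
}
```

Running the same listEntries() check against the real restaurant-service-aws-lambda.zip is a quick way to catch packaging mistakes before deploying.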


Now that we’ve built the ZIP file, let’s look at how to deploy the lambda function.


12.6.3. Deploying lambda functions using the Serverless framework


Using the tools provided by AWS to deploy lambda functions and configure the API gateway is quite tedious. Fortunately, the Serverless open source project makes using lambda functions a lot easier. When using Serverless, you write a simple serverless.yml file that defines your lambda functions and their RESTful endpoints. Serverless then deploys the lambda functions and creates and configures an API gateway that routes requests to them.


The following listing is an excerpt of the serverless.yml that deploys Restaurant Service as a lambda.

Listing 12.12. The serverless.yml that deploys Restaurant Service
service: ftgo-application-lambda

provider:
  name: aws                                                     1
  runtime: java8
  timeout: 35
  region: ${env:AWS_REGION}
  stage: dev
  environment:                                                  2
    SPRING_DATASOURCE_DRIVER_CLASS_NAME: com.mysql.jdbc.Driver
    SPRING_DATASOURCE_URL: ...
    SPRING_DATASOURCE_USERNAME: ...
    SPRING_DATASOURCE_PASSWORD: ...

package:                                                        3
  artifact: ftgo-restaurant-service-aws-lambda/build/distributions/ftgo-restaurant-service-aws-lambda.zip

functions:                                                      4
  create-restaurant:
    handler: net.chrisrichardson.ftgo.restaurantservice.lambda.CreateRestaurantRequestHandler
    events:
      - http:
          path: restaurants
          method: post
  find-restaurant:
    handler: net.chrisrichardson.ftgo.restaurantservice.lambda.FindRestaurantRequestHandler
    events:
      - http:
          path: restaurants/{restaurantId}
          method: get

  • 1 Tells serverless to deploy on AWS
  • 2 Supplies the service’s externalized configuration via environment variables
  • 3 The ZIP file containing the lambda functions
  • 4 Lambda function definitions consisting of the handler function and HTTP endpoint


You can then use the serverless deploy command, which reads the serverless.yml file, deploys the lambda functions, and configures the AWS API Gateway. After a short wait, your service will be accessible via the API gateway’s endpoint URL. AWS Lambda will provision as many instances of each Restaurant Service lambda function as are needed to support the load. If you change the code, you can easily update the lambda by rebuilding the ZIP file and rerunning serverless deploy. No servers involved!
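Once deployed, the service is invoked like any other REST API. The sketch below shows how a client might build a request for the find-restaurant endpoint; the base URL is a made-up placeholder, because the real one is generated by AWS and printed by serverless deploy (the request is only constructed here, not sent):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RestaurantClientSketch {
    // Hypothetical endpoint URL; the real one is printed by 'serverless deploy'.
    static final String BASE_URL = "https://example.execute-api.us-west-2.amazonaws.com/dev";

    // Builds a GET request for the restaurants/{restaurantId} endpoint.
    static HttpRequest findRestaurantRequest(long restaurantId) {
        return HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/restaurants/" + restaurantId))
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = findRestaurantRequest(1L);
        System.out.println(request.method() + " " + request.uri());
    }
}
```

Sending the request with java.net.http.HttpClient would exercise the full path through the API gateway to the FindRestaurantRequestHandler lambda.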


The evolution of infrastructure is remarkable. Not that long ago, we manually deployed applications on physical machines. Today, highly automated public clouds provide a range of virtual deployment options. One option is to deploy services as virtual machines. Or better yet, we can package services as containers and deploy them using sophisticated Docker orchestration frameworks such as Kubernetes. Sometimes we even avoid thinking about infrastructure entirely and deploy services as lightweight, ephemeral lambda functions.


Summary

  • You should choose the most lightweight deployment pattern that supports your service’s requirements. Evaluate the options in the following order: serverless, containers, virtual machines, and language-specific packages.
  • A serverless deployment isn’t a good fit for every service, because of long-tail latencies and the requirement to use an event/request-based programming model. When it is a good fit, though, serverless deployment is an extremely compelling option because it eliminates the need to administer operating systems and runtimes and provides automated elastic provisioning and request-based pricing.
  • Docker containers, which are a lightweight, OS-level virtualization technology, are more flexible than serverless deployment and have more predictable latency. It’s best to use a Docker orchestration framework such as Kubernetes, which manages containers on a cluster of machines. The drawback of using containers is that you must administer the operating systems and runtimes and most likely the Docker orchestration framework and the VMs that it runs on.
  • The third deployment option is to deploy your service as a virtual machine. On one hand, virtual machines are a heavyweight deployment option, so deployment is slower and your service will most likely use more resources than with the second option. On the other hand, modern clouds such as Amazon EC2 are highly automated and provide a rich set of features. Consequently, it may sometimes be easier to deploy a small, simple application using virtual machines than to set up a Docker orchestration framework.
  • Deploying your services as language-specific packages is generally best avoided unless you only have a small number of services. For example, as described in chapter 13, when starting on your journey to microservices you’ll probably deploy the services using the same mechanism you use for your monolithic application, which is most likely this option. You should only consider setting up a sophisticated deployment infrastructure such as Kubernetes once you’ve developed some services.
  • One of the many benefits of using a service mesh, a networking layer that mediates all network traffic in and out of services, is that it enables you to deploy a service in production, test it, and only then route production traffic to it. Separating deployment from release improves the reliability of rolling out new versions of services.


Chapter 13. Refactoring to microservices


This chapter covers

  • When to migrate a monolithic application to a microservice architecture
  • Why using an incremental approach is essential when refactoring a monolithic application to microservices
  • Implementing new features as services
  • Extracting services from the monolith
  • Integrating a service and the monolith


I hope that this book has given you a good understanding of the microservice architecture, its benefits and drawbacks, and when to use it. There is, however, a fairly good chance you’re working on a large, complex monolithic application. Your daily experience of developing and deploying your application is slow and painful. Microservices, which appear to be a good fit for your application, seem like a distant nirvana. Like Mary and the rest of the FTGO development team, you’re wondering how on earth you can adopt the microservice architecture.


Fortunately, there are strategies you can use to escape from monolithic hell without having to rewrite your application from scratch. You incrementally convert your monolith into microservices by developing what’s known as a strangler application. The idea of a strangler application comes from strangler vines, which grow in rain forests by enveloping and sometimes killing trees. A strangler application is a new application consisting of microservices that you develop by implementing new functionality as services and extracting services from the monolith. Over time, as the strangler application implements more and more functionality, it shrinks and ultimately kills the monolith. An important benefit of developing a strangler application is that, unlike a big bang rewrite, it delivers value to the business early and often.


I begin this chapter by describing the motivations for refactoring a monolith to a microservice architecture. I then describe how to develop the strangler application by implementing new functionality as services and extracting services from the monolith. Next, I cover various design topics, including how to integrate the monolith and services, how to maintain database consistency across the monolith and services, and how to handle security. I end the chapter by describing a couple of example services. One service is Delayed Order Service, which implements brand new functionality. The other service is Delivery Service, which is extracted from the monolith. Let’s start by taking a look at the concept of refactoring to a microservice architecture.


13.1. Overview of refactoring to microservices


Put yourself in Mary’s shoes. You’re responsible for the FTGO application, a large and old monolithic application. The business is extremely frustrated with engineering’s inability to deliver features rapidly and reliably. FTGO appears to be suffering from a classic case of monolithic hell. Microservices seem, at least on the surface, to be the answer. Should you propose diverting development resources away from feature development and toward migrating to a microservice architecture?


I start this section by discussing why you should consider refactoring to microservices. I also discuss why it’s important to be sure that your software development problems stem from being in monolithic hell rather than from, for example, a poor software development process. I then describe strategies for incrementally refactoring your monolith to a microservice architecture. Next, I discuss the importance of delivering improvements early and often in order to maintain the support of the business. I then describe why you should avoid investing in a sophisticated deployment infrastructure until you’ve developed a few services. Finally, I describe the various strategies you can use to introduce services into your architecture, including implementing new features as services and extracting services from the monolith.


13.1.1. Why refactor a monolith?


The microservice architecture has, as described in chapter 1, numerous benefits. It has much better maintainability, testability, and deployability, so it accelerates development. The microservice architecture is more scalable and improves fault isolation. It’s also much easier to evolve your technology stack. But refactoring a monolith to microservices is a significant undertaking. It will divert resources away from new feature development. As a result, it’s likely that the business will only support the adoption of microservices if it solves a significant business problem.


If you’re in monolithic hell, it’s likely that you already have at least one business problem. Here are some examples of business problems caused by monolithic hell:

  • Slow delivery: The application is difficult to understand, maintain, and test, so developer productivity is low. As a result, the organization is unable to compete effectively and risks being overtaken by competitors.
  • Buggy software releases: The lack of testability means that software releases are often buggy. This makes customers unhappy, which results in losing customers and reduced revenue.
  • Poor scalability: Scaling a monolithic application is difficult because it combines modules with very different resource requirements into one executable component. The lack of scalability means that it’s either impossible or prohibitively expensive to scale the application beyond a certain point. As a result, the application can’t support the current or predicted needs of the business.


It’s important to be sure that these problems are there because you’ve outgrown your architecture. A common reason for slow delivery and buggy releases is a poor software development process. For example, if you’re still relying on manual testing, then adopting automated testing alone can significantly increase development velocity. Similarly, you can sometimes solve scalability problems without changing your architecture. You should first try simpler solutions. If, and only if, you still have software delivery problems should you then migrate to the microservice architecture. Let’s look at how to do that.


13.1.2. Strangling the monolith

The process of transforming a monolithic application into microservices is a form of application modernization (https://en.wikipedia.org/wiki/Software_modernization). Application modernization is the process of converting a legacy application to one having a modern architecture and technology stack. Developers have been modernizing applications for decades. As a result, there is wisdom accumulated through experience we can use when refactoring an application into a microservice architecture. The most important lesson learned over the years is to not do a big bang rewrite.

A big bang rewrite is when you develop a new application—in this case, a microservices-based application—from scratch. Although starting from scratch and leaving the legacy code base behind sounds appealing, it’s extremely risky and will likely end in failure. You will spend months, possibly years, duplicating the existing functionality, and only then can you implement the features that the business needs today! Also, you’ll need to develop the legacy application anyway, which diverts effort away from the rewrite and means that you have a constantly moving target. What’s more, it’s possible that you’ll waste time reimplementing features that are no longer needed. As Martin Fowler reportedly said, “the only thing a Big Bang rewrite guarantees is a Big Bang!” (www.randyshoup.com/evolutionary-architecture).

Instead of doing a big bang rewrite, you should, as figure 13.1 shows, incrementally refactor your monolithic application. You gradually build a new application, called a strangler application, that consists of microservices that run in conjunction with your monolithic application. Over time, the amount of functionality implemented by the monolithic application shrinks until either it disappears entirely or it becomes just another microservice. This strategy is akin to servicing your car while driving down the highway at 70 mph. It’s challenging, but far less risky than attempting a big bang rewrite.

Figure 13.1. The monolithic application is incrementally replaced by a strangler application consisting of services. Eventually, the monolith is either entirely replaced by the strangler application or becomes just another microservice.

Martin Fowler refers to this application modernization strategy as the Strangler application pattern (www.martinfowler.com/bliki/StranglerApplication.html). The name comes from the strangler vine (or strangler fig—see https://en.wikipedia.org/wiki/Strangler_fig) that is found in rain forests. A strangler vine grows around a tree in order to reach the sunlight above the forest canopy. Often the tree dies, because either it’s killed by the vine or it dies of old age, leaving a tree-shaped vine.

Pattern: Strangler application

Modernize an application by incrementally developing a new (strangler) application around the legacy application. See http://microservices.io/patterns/refactoring/strangler-application.html.

The refactoring process typically takes months, possibly years. For example, according to Steve Yegge (https://plus.google.com/+RipRowan/posts/eVeouesvaVX) it took Amazon.com a couple of years to refactor its monolith. In the case of a very large system, you may never complete the process. You could, for example, get to a point where you have tasks that are more important than breaking up the monolith, such as implementing revenue-generating features. If the monolith isn’t an obstacle to ongoing development, you may as well leave it alone.

Demonstrate value early and often

An important benefit of incrementally refactoring to a microservice architecture is that you get an immediate return on your investment. That’s very different than a big bang rewrite, which doesn’t deliver any benefit until it’s complete. When incrementally refactoring the monolith, you can develop each new service using a new technology stack and a modern, high-velocity, DevOps-style development and delivery process. As a result, your team’s delivery velocity steadily increases over time.

What’s more, you can migrate the high-value areas of your application to microservices first. For instance, imagine you’re working on the FTGO application. The business might, for example, decide that the delivery scheduling algorithm is a key competitive advantage. It’s likely that delivery management will be an area of constant, ongoing development. By extracting delivery management into a standalone service, the delivery management team will be able to work independently of the rest of the FTGO developers and significantly increase their development velocity. They’ll be able to frequently deploy new versions of the algorithm and evaluate their effectiveness.

Another benefit of being able to deliver value earlier is that it helps maintain the business’s support for the migration effort. Their ongoing support is essential, because the refactoring effort will mean that less time is spent on developing features. Some organizations have difficulty eliminating technical debt because past attempts were too ambitious and didn’t provide much benefit. As a result, the business becomes reluctant to invest in further cleanup efforts. The incremental nature of refactoring to microservices means that the development team is able to demonstrate value early and often.

Minimize changes to the monolith

A recurring theme in this chapter is that you should avoid making widespread changes to the monolith when migrating to a microservice architecture. It’s inevitable that you’ll need to make some changes in order to support migration to services. Section 13.3.2 talks about how the monolith often needs to be modified so that it can participate in sagas that maintain data consistency across the monolith and services. The problem with making widespread changes to the monolith is that it’s time consuming, costly, and risky. After all, that’s probably why you want to migrate to microservices in the first place.

Fortunately, there are strategies you can use for reducing the scope of the changes you need to make. For example, in section 13.2.3, I describe the strategy of replicating data from an extracted service back to the monolith’s database. And in section 13.3.2, I show how you can carefully sequence the extraction of services to reduce the impact on the monolith. By applying these strategies, you can reduce the amount of work required to refactor the monolith.

Technical deployment infrastructure: you don’t need all of it yet

Throughout this book I’ve discussed a lot of shiny new technology, including deployment platforms such as Kubernetes and AWS Lambda and service discovery mechanisms. You might be tempted to begin your migration to microservices by selecting technologies and building out that infrastructure. You might even feel pressure from the business people and from your friendly PaaS vendor to start spending money on this kind of infrastructure.

As tempting as it seems to build out this infrastructure up front, I recommend only making a minimal up-front investment in developing it. The only thing you can’t live without is a deployment pipeline that performs automated testing. For example, if you only have a handful of services, you don’t need a sophisticated deployment and observability infrastructure. Initially, you can even get away with just using a hard-coded configuration file for service discovery. I suggest deferring any decisions about technical infrastructure that involve significant investment until you’ve gained real experience with the microservice architecture. It’s only once you have a few services running that you’ll have the experience to pick technologies.
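To illustrate just how little machinery you need at first, a hard-coded service discovery mechanism can be as simple as a map from service name to base URL. This is a minimal sketch; the service names and ports are illustrative assumptions, not from the book:

```java
import java.util.Map;

// Minimal "service discovery" via a hard-coded registry.
// Good enough while you only have a handful of services.
public class HardCodedRegistry {
    // Hypothetical service names and endpoints.
    private static final Map<String, String> SERVICES = Map.of(
        "delivery-service", "http://localhost:8081",
        "monolith",         "http://localhost:8080"
    );

    // Look up a service's base URL; fail fast if it's unknown.
    public static String lookup(String serviceName) {
        String url = SERVICES.get(serviceName);
        if (url == null) {
            throw new IllegalArgumentException("Unknown service: " + serviceName);
        }
        return url;
    }
}
```

When the number of services grows, you can swap this class for a real registry without changing its callers.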

Let’s now look at the strategies you can use for migrating to a microservice architecture.

13.2. Strategies for refactoring a monolith to microservices

There are three main strategies for strangling the monolith and incrementally replacing it with microservices:

  1. Implement new features as services.
  2. Separate the presentation tier and backend.
  3. Break up the monolith by extracting functionality into services.

The first strategy stops the monolith from growing. It’s typically a quick way to demonstrate the value of microservices, helping build support for the migration effort. The other two strategies break apart the monolith. When refactoring your monolith, you might sometimes use the second strategy, but you’ll definitely use the third strategy, because it’s how functionality is migrated from the monolith into the strangler application.

Let’s take a look at each of these strategies, starting with implementing new features as services.

13.2.1. Implement new features as services

The Law of Holes states that “if you find yourself in a hole, stop digging” (https://en.m.wikipedia.org/wiki/Law_of_holes). This is great advice to follow when your monolithic application has become unmanageable. In other words, if you have a large, complex monolithic application, don’t implement new features by adding code to the monolith. That will make your monolith even larger and more unmanageable. Instead, you should implement new features as services.

This is a great way to begin migrating your monolithic application to a microservice architecture. It reduces the growth rate of the monolith. It accelerates the development of the new features, because you’re doing development in a brand new code base. It also quickly demonstrates the value of adopting the microservice architecture.

Integrating the new service with the monolith

Figure 13.2 shows the application’s architecture after implementing a new feature as a service. Besides the new service and monolith, the architecture includes two other elements that integrate the service into the application:

  • API gateway: Routes requests for new functionality to the new service and routes legacy requests to the monolith.
  • Integration glue code: Integrates the service with the monolith. It enables the service to access data owned by the monolith and to invoke functionality implemented by the monolith.

Figure 13.2. A new feature implemented as a service that’s part of the strangler application. The integration glue integrates the service with the monolith and consists of adapters that implement synchronous and asynchronous APIs. The API gateway routes requests that invoke new functionality to the service.
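The gateway’s routing rule can be sketched in a few lines: requests for the new feature go to the new service, and everything else goes to the monolith. The path prefix and service names below are hypothetical:

```java
// Sketch of a strangler API gateway's routing decision.
// Requests for new functionality are routed to the new service;
// all legacy requests continue to hit the monolith.
public class StranglerRouter {
    public static String route(String path) {
        if (path.startsWith("/deliveries")) {
            return "delayed-delivery-service"; // new functionality
        }
        return "monolith"; // legacy requests
    }
}
```

A production gateway (such as Spring Cloud Gateway or NGINX) expresses the same rule declaratively, but the decision it makes is exactly this one.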

The integration glue code isn’t a standalone component. Instead, it consists of adapters in the monolith and the service that use one or more interprocess communication mechanisms. For example, integration glue for Delayed Delivery Service, described in section 13.4.1, uses both REST and domain events. The service retrieves customer contract information from the monolith by invoking a REST API. The monolith publishes Order domain events so that Delayed Delivery Service can track the state of Orders and respond to orders that won’t be delivered on time. Section 13.3.1 describes the integration glue code in more detail.
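On the service side, the event-consuming half of the integration glue can be sketched as an adapter that receives Order domain events and maintains a replica of each order’s state. The event types and method names here are illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an integration-glue adapter in the new service: it consumes
// Order domain events published by the monolith and tracks order state
// so the service can react (for example, to late deliveries).
public class OrderEventHandler {
    private final Map<Long, String> orderStates = new HashMap<>();

    // A message-broker client (not shown) would invoke this per event.
    public void handleOrderEvent(long orderId, String eventType) {
        orderStates.put(orderId, eventType);
    }

    public String stateOf(long orderId) {
        return orderStates.getOrDefault(orderId, "UNKNOWN");
    }
}
```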

When to implement a new feature as a service

Ideally, you should implement every new feature in the strangler application rather than in the monolith. You’ll implement a new feature as either a new service or as part of an existing service. This way you’ll avoid ever having to touch the monolith code base. Unfortunately, though, not every new feature can be implemented as a service.

That’s because the essence of a microservice architecture is a set of loosely coupled services that are organized around business capabilities. A feature might, for instance, be too small to be a meaningful service. You might, for example, just need to add a few fields and methods to an existing class. Or the new feature might be too tightly coupled to the code in the monolith. If you attempted to implement this kind of feature as a service you would typically find that performance would suffer because of excessive interprocess communication. You might also have problems maintaining data consistency. If a new feature can’t be implemented as a service, the solution is often to initially implement the new feature in the monolith. Later on, you can then extract that feature along with other related features into their own service.

Implementing new features as services accelerates the development of those features. It’s a good way to quickly demonstrate the value of the microservice architecture. It also reduces the monolith’s growth rate. But ultimately, you need to break apart the monolith using the two other strategies. You need to migrate functionality to the strangler application by extracting functionality from the monolith into services. You might also be able to improve development velocity by splitting the monolith horizontally. Let’s look at how to do that.

13.2.2. Separate presentation tier from the backend

One strategy for shrinking a monolithic application is to split the presentation layer from the business logic and data access layers. A typical enterprise application consists of the following layers:

  • Presentation logic: This consists of modules that handle HTTP requests and generate HTML pages that implement a web UI. In an application that has a sophisticated user interface, the presentation tier is often a substantial body of code.
  • Business logic: This consists of modules that implement the business rules, which can be complex in an enterprise application.
  • Data access logic: This consists of modules that access infrastructure services such as databases and message brokers.

There is usually a clean separation between the presentation logic and the business and data access logic. The business tier has a coarse-grained API consisting of one or more facades that encapsulate the business logic. This API is a natural seam along which you can split the monolith into two smaller applications, as shown in figure 13.3. One application contains the presentation layer, and the other contains the business and data access logic. After the split, the presentation logic application makes remote calls to the business logic application.

Figure 13.3. Splitting the frontend from the backend enables each to be deployed independently of the other. It also exposes an API for services to invoke.
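The coarse-grained facade that forms the seam can be sketched as follows. The interface and method names are hypothetical; the point is that the presentation tier depends only on the facade, so swapping an in-process implementation for a remote proxy leaves the presentation code unchanged:

```java
// The coarse-grained API of the business tier: the natural seam
// along which the monolith is split. Names are illustrative.
interface OrderFacade {
    String getOrderStatus(long orderId);
}

// Before the split: an in-process implementation inside the monolith.
class LocalOrderFacade implements OrderFacade {
    public String getOrderStatus(long orderId) { return "ACCEPTED"; }
}

// After the split: the presentation application uses a proxy that would
// make a remote call (for example, REST) to the backend application.
// Here the remote call is stubbed so the sketch is self-contained.
class RemoteOrderFacade implements OrderFacade {
    public String getOrderStatus(long orderId) {
        return new LocalOrderFacade().getOrderStatus(orderId);
    }
}
```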

Splitting the monolith in this way has two main benefits. It enables you to develop, deploy, and scale the two applications independently of one another. In particular, it allows the presentation layer developers to rapidly iterate on the user interface and easily perform A/B testing, for example, without having to deploy the backend. Another benefit of this approach is that it exposes a remote API that can be called by the microservices you develop later.

But this strategy is only a partial solution. It’s very likely that at least one or both of the resulting applications will still be an unmanageable monolith. You need to use the third strategy to replace the monolith with services.

13.2.3. Extract business capabilities into services

Implementing new features as services and splitting the frontend web application from the backend will only get you so far. You’ll still end up doing a lot of development in the monolithic code base. If you want to significantly improve your application’s architecture and increase your development velocity, you need to break apart the monolith by incrementally migrating business capabilities from the monolith to services. For example, section 13.5 describes how to extract delivery management from the FTGO monolith into a new Delivery Service. When you use this strategy, over time the number of business capabilities implemented by the services grows, and the monolith gradually shrinks.

The functionality you want to extract into a service is a vertical slice through the monolith. The slice consists of the following:

  • Inbound adapters that implement API endpoints
  • Domain logic
  • Outbound adapters such as database access logic
  • The monolith’s database schema

As figure 13.4 shows, this code is extracted from the monolith and moved into a standalone service. An API gateway routes requests that invoke the extracted business capability to the service and routes the other requests to the monolith. The monolith and the service collaborate via the integration glue code. As described in section 13.3.1, the integration glue consists of adapters in the service and monolith that use one or more interprocess communication (IPC) mechanisms.

Figure 13.4. Break apart the monolith by extracting services. You identify a slice of functionality, consisting of business logic and adapters, to extract into a service, and move that code into the service. The newly extracted service and the monolith collaborate via the APIs provided by the integration glue.

Extracting services is challenging. You need to determine how to split the monolith’s domain model into two separate domain models, one of which becomes the service’s domain model. You need to break dependencies such as object references. You might even need to split classes in order to move functionality into the service. You also need to refactor the database.

Extracting a service is often time consuming, especially because the monolith’s code base is likely to be messy. Consequently, you need to carefully think about which services to extract. It’s important to focus on refactoring those parts of the application that provide a lot of value. Before extracting a service, ask yourself what the benefit is of doing that.

For example, it’s worthwhile to extract a service that implements functionality that’s critical to the business and constantly evolving. It’s not valuable to invest effort in extracting services when there’s not much benefit from doing so. Later in this section I describe some strategies for determining what to extract and when. But first, let’s look in more detail at some of the challenges you’ll face when extracting a service and how to address them.

You’ll encounter a couple of challenges when extracting a service:

  • Splitting the domain model
  • Refactoring the database

Let’s look at each one, starting with splitting the domain model.

Splitting the domain model

In order to extract a service, you need to extract its domain model out of the monolith’s domain model. You’ll need to perform major surgery to split the domain models. One challenge you’ll encounter is eliminating object references that would otherwise span service boundaries. It’s possible that classes that remain in the monolith will reference classes that have been moved to the service or vice versa. For example, imagine that, as figure 13.5 shows, you extract Order Service, and as a result its Order class references the monolith’s Restaurant class. Because a service instance is typically a process, it doesn’t make sense to have object references that cross service boundaries. Somehow you need to eliminate these types of object reference.

Figure 13.5. The Order domain class has a reference to the Restaurant class. If we extract Order into a separate service, we need to do something about its reference to Restaurant, because object references between processes don’t make sense.

One good way to solve this problem is to think in terms of DDD aggregates, described in chapter 5. Aggregates reference each other using primary keys rather than object references. You would, therefore, think of the Order and Restaurant classes as aggregates and, as figure 13.6 shows, replace the reference to Restaurant in the Order class with a restaurantId field that stores the primary key value.

Figure 13.6. The Order class’s reference to Restaurant is replaced with the Restaurant’s primary key in order to eliminate an object reference that would cross process boundaries.
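In code, the refactoring amounts to swapping the object reference for a primary-key field. This is a minimal sketch; the field and constructor shapes are illustrative, not the book’s actual FTGO classes:

```java
// Before extraction: Order holds an object reference to Restaurant,
// which can't span a process boundary once Order Service is extracted.
class Restaurant {
    final long id;
    Restaurant(long id) { this.id = id; }
}

class OrderBefore {
    Restaurant restaurant; // in-process object reference
    OrderBefore(Restaurant restaurant) { this.restaurant = restaurant; }
}

// After extraction: Order, treated as a DDD aggregate, references the
// Restaurant aggregate by primary key instead.
class OrderAfter {
    long restaurantId; // primary key; safe to cross service boundaries
    OrderAfter(long restaurantId) { this.restaurantId = restaurantId; }
}
```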

One issue with replacing object references with primary keys is that although this is a minor change to the class, it can potentially have a large impact on the clients of the class, which expect an object reference. Later in this section, I describe how to reduce the scope of the change by replicating data between the service and monolith. Delivery Service, for example, could define a Restaurant class that’s a replica of the monolith’s Restaurant class.

Extracting a service is often much more involved than moving entire classes into a service. An even greater challenge with splitting a domain model is extracting functionality that’s embedded in a class that has other responsibilities. This problem often occurs in god classes, described in chapter 2, that have an excessive number of responsibilities. For example, the Order class is one of the god classes in the FTGO application. It implements multiple business capabilities, including order management, delivery management, and so on. Later in section 13.5, I discuss how extracting the delivery management into a service involves extracting a Delivery class from the Order class. The Delivery entity implements the delivery management functionality that was previously bundled with other functionality in the Order class.
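The shape of that god-class split can be sketched as follows: the delivery-related state moves into a new Delivery entity that links back to its Order by id, and the slimmed-down Order keeps only order management. The field names are illustrative assumptions:

```java
// Sketch of splitting the Order god class: delivery-related state moves
// into a new Delivery entity owned by Delivery Service.
class Delivery {
    long orderId;          // links back to the Order it belongs to
    String deliveryState;  // e.g. SCHEDULED, PICKED_UP, DELIVERED
    Delivery(long orderId, String deliveryState) {
        this.orderId = orderId;
        this.deliveryState = deliveryState;
    }
}

// The slimmed-down Order keeps order-management responsibilities and no
// longer embeds the delivery fields.
class SlimOrder {
    long id;
    String orderState;
    SlimOrder(long id, String orderState) {
        this.id = id;
        this.orderState = orderState;
    }
}
```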

Refactoring the database

Splitting a domain model involves more than just changing code. Many classes in a domain model are persistent. Their fields are mapped to a database schema. Consequently, when you extract a service from the monolith, you’re also moving data. You need to move tables from the monolith’s database to the service’s database.

Also, when you split an entity you need to split the corresponding database table and move the new table to the service. For example, when extracting delivery management into a service, you split the Order entity and extract a Delivery entity. At the database level, you split the ORDERS table and define a new DELIVERY table. You then move the DELIVERY table to the service.

The book Refactoring Databases by Scott W. Ambler and Pramod J. Sadalage (Addison-Wesley, 2011) describes a set of refactorings for a database schema. For example, it describes the Split Table refactoring, which splits a table into two or more tables. Many of the techniques in that book are useful when extracting services from the monolith. One such technique is the idea of replicating data in order to allow you to incrementally update clients of the database to use the new schema. We can adapt that idea to reduce the scope of the changes you must make to the monolith when extracting a service.

Replicate data to avoid widespread changes

As mentioned, extracting a service requires you to make changes to the monolith’s domain model. For example, you replace object references with primary keys and split classes. These types of changes can ripple through the code base and require you to make widespread changes to the monolith. For example, if you split the Order entity and extract a Delivery entity, you’ll have to change every place in the code that references the fields that have been moved. Making these kinds of changes can be extremely time consuming and can become a huge barrier to breaking up the monolith.

A great way to delay and possibly avoid making these kinds of expensive changes is to use an approach that’s similar to the one described in Refactoring Databases. A major obstacle to refactoring a database is changing all the clients of that database to use the new schema. The solution proposed in the book is to preserve the original schema for a transition period and use triggers to synchronize the original and new schemas. You then migrate clients from the old schema to the new schema over time.

We can use a similar approach when extracting services from the monolith. For example, when extracting the Delivery entity, we leave the Order entity mostly unchanged for a transition period. As figure 13.7 shows, we make the delivery-related fields read-only and keep them up-to-date by replicating data from Delivery Service back to the monolith. As a result, we only need to find the places in the monolith’s code that update those fields and change them to invoke the new Delivery Service.

Figure 13.7. Minimize the scope of the changes to the FTGO monolith by replicating delivery-related data from the newly extracted Delivery Service back to the monolith’s database.
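The replication mechanism can be sketched as a small adapter in the monolith: it subscribes to state-change events from Delivery Service and refreshes the now read-only delivery columns, so the monolith’s legacy readers keep working unchanged. Event and method names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the monolith-side replication adapter: the monolith's
// delivery-related Order fields become read-only and are kept
// up-to-date from events published by Delivery Service.
public class DeliveryReplica {
    // orderId -> replicated delivery status (stands in for a DB column)
    private final Map<Long, String> deliveryStatusByOrder = new HashMap<>();

    // Invoked when Delivery Service publishes a state-change event.
    public void onDeliveryStateChanged(long orderId, String newState) {
        deliveryStatusByOrder.put(orderId, newState);
    }

    // Legacy readers in the monolith query the replica as before.
    public String deliveryStatus(long orderId) {
        return deliveryStatusByOrder.getOrDefault(orderId, "UNKNOWN");
    }
}
```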

Preserving the structure of the Order entity by replicating data from Delivery Service significantly reduces the amount of work we need to do immediately. Over time, we can migrate code that uses the delivery-related Order entity fields or ORDERS table columns to Delivery Service. What’s more, it’s possible that we never need to make that change in the monolith. If that code is subsequently extracted into a service, then the service can access Delivery Service.

Which services to extract, and when

As I mentioned, breaking apart the monolith is time consuming. It diverts effort away from implementing features. As a result, you must carefully decide the sequence in which you extract services. You need to focus on extracting services that give the largest benefit. What’s more, you want to continually demonstrate to the business that there’s value in migrating to a microservice architecture.

On any journey, it’s essential to know where you’re going. A good way to start the migration to microservices is with a time-boxed architecture definition effort. You should spend a short amount of time, such as a couple of weeks, brainstorming your ideal architecture and defining a set of services. This gives you a destination to aim for. It’s important, though, to remember that this architecture isn’t set in stone. As you break apart the monolith and gain experience, you should revise the architecture to take into account what you’ve learned.

Once you’ve determined the approximate destination, the next step is to start breaking apart the monolith. There are a couple of different strategies you can use to determine the sequence in which you extract services.

One strategy is to effectively freeze development of the monolith and extract services on demand. Instead of implementing features or fixing bugs in the monolith, you extract the necessary service or services and change those. One benefit of this approach is that it forces you to break up the monolith. One drawback is that the extraction of services is driven by short-term requirements rather than long-term needs. For instance, it requires you to extract services even if you're making a small change to a relatively stable part of the system. As a result, you risk doing a lot of work for minimal benefit.

An alternative strategy is a more planned approach, where you rank the modules of an application by the benefit you anticipate getting from extracting them. There are a few reasons why extracting a service is beneficial:

  • Accelerates development: If your application’s roadmap suggests that a particular part of your application will undergo a lot of development over the next year, then converting it to a service accelerates development.
  • Solves a performance, scaling, or reliability problem: If a particular part of your application has a performance or scalability problem or is unreliable, then it’s valuable to convert it to a service.
  • Enables the extraction of some other services: Sometimes extracting one service simplifies the extraction of another service, due to dependencies between modules.

You can use these criteria to add refactoring tasks to your application’s backlog, ranked by expected benefit. The benefit of this approach is that it’s more strategic and much more closely aligned with the needs of the business. During sprint planning, you decide whether it’s more valuable to implement features or extract services.

13.3. Designing how the service and the monolith collaborate

A service is rarely standalone. It usually needs to collaborate with the monolith. Sometimes a service needs to access data owned by the monolith or invoke its operations. For example, Delayed Delivery Service, described in detail in section 13.4.1, requires access to the monolith’s orders and customer contact info. The monolith might also need to access data owned by the service or invoke its operations. For example, later in section 13.5, when discussing how to extract delivery management into a service, I describe how the monolith needs to invoke Delivery Service.

One important concern is maintaining data consistency between the service and monolith. In particular, when you extract a service from the monolith, you invariably split what were originally ACID transactions. You must be careful to ensure that data consistency is still maintained. As described later in this section, sometimes you use sagas to maintain data consistency.

The interaction between a service and the monolith is, as described earlier, facilitated by integration glue code. Figure 13.8 shows the structure of the integration glue. It consists of adapters in the service and monolith that communicate using some kind of IPC mechanism. Depending on the requirements, the service and monolith might interact over REST or they might use messaging. They might even communicate using multiple IPC mechanisms.

Figure 13.8. When migrating the monolith to microservices, the service and the monolith often need to access each other’s data. This interaction is facilitated by integration glue, which consists of adapters that implement APIs. Some APIs are messaging based. Others are RPI based.

For example, Delayed Delivery Service uses both REST and domain events. It retrieves customer contact info from the monolith using REST. It tracks the state of Orders by subscribing to domain events published by the monolith.

In this section, I first describe the design of the integration glue. I talk about the problems it solves and the different implementation options. After that I describe transaction management strategies, including the use of sagas. I discuss how sometimes the requirement to maintain data consistency changes the order in which you extract services.

Let’s first look at the design of the integration glue.

13.3.1. Designing the integration glue

When implementing a feature as a service or extracting a service from the monolith, you must develop the integration glue that enables a service to collaborate with the monolith. It consists of code in both the service and monolith that uses some kind of IPC mechanism. The structure of the integration glue depends on the type of IPC mechanism that is used. If, for example, the service invokes the monolith using REST, then the integration glue consists of a REST client in the service and web controllers in the monolith. Alternatively, if the monolith subscribes to domain events published by the service, then the integration glue consists of an event-publishing adapter in the service and event handlers in the monolith.

Designing the integration glue API

The first step in designing the integration glue is to decide what APIs it provides to the domain logic. There are a couple of different styles of interface to choose from, depending on whether you’re querying data or updating data. Let’s say you’re working on Delayed Delivery Service, which needs to retrieve customer contact info from the monolith. The service’s business logic doesn’t need to know the IPC mechanism that the integration glue uses to retrieve the information. Therefore, that mechanism should be encapsulated by an interface. Because Delayed Delivery Service is querying data, it makes sense to define a CustomerContactInfoRepository:

interface CustomerContactInfoRepository {
  CustomerContactInfo findCustomerContactInfo(long customerId);
}

The service’s business logic can invoke this API without knowing how the integration glue retrieves the data.
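The separation can be sketched as follows. This is a hypothetical example: `DelayedOrderNotifier`, `CustomerContactInfo`, and the notification logic are stand-ins invented for illustration; only the repository interface comes from the text above. The business logic depends solely on the interface, so the IPC mechanism can be swapped without touching it.

```java
// A minimal sketch of business logic in Delayed Delivery Service that uses
// the repository interface without knowing the underlying IPC mechanism.
public class DelayedOrderNotifier {

    public interface CustomerContactInfoRepository {
        CustomerContactInfo findCustomerContactInfo(long customerId);
    }

    // Hypothetical stand-in for the service's domain class.
    public record CustomerContactInfo(String email) {}

    private final CustomerContactInfoRepository repository;

    public DelayedOrderNotifier(CustomerContactInfoRepository repository) {
        this.repository = repository;
    }

    // The business logic sees only the interface, not REST or messaging.
    public String notifyCustomerOfDelay(long customerId) {
        CustomerContactInfo info = repository.findCustomerContactInfo(customerId);
        return "Delay notice sent to " + info.email();
    }

    public static void main(String[] args) {
        // A fake adapter standing in for the real integration glue.
        DelayedOrderNotifier notifier =
            new DelayedOrderNotifier(id -> new CustomerContactInfo("ann@example.com"));
        System.out.println(notifier.notifyCustomerOfDelay(101L));
    }
}
```

In tests, as above, the repository can be replaced by a lambda; in production it would be implemented by the integration glue's adapter.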

Let’s consider a different service. Imagine that you’re extracting delivery management from the FTGO monolith. The monolith needs to invoke Delivery Service to schedule, reschedule, and cancel deliveries. Once again, the details of the underlying IPC mechanism aren’t important to the business logic and should be encapsulated by an interface. In this scenario, the monolith must invoke a service operation, so using a repository doesn’t make sense. A better approach is to define a service interface, such as the following:

interface DeliveryService {
  void scheduleDelivery(...);
  void rescheduleDelivery(...);
  void cancelDelivery(...);
}

The monolith’s business logic invokes this API without knowing how it’s implemented by the integration glue.

Now that we’ve seen interface design, let’s look at interaction styles and IPC mechanisms.

Selecting an interaction style and IPC mechanism

An important design decision you must make when designing the integration glue is selecting the interaction styles and IPC mechanisms that enable the service and the monolith to collaborate. As described in chapter 3, there are several interaction styles and IPC mechanisms to choose from. Which one you should use depends on what one party—the service or monolith—needs in order to query or update the other party.

If one party needs to query data owned by the other party, there are several options. One option is, as figure 13.9 shows, for the adapter that implements the repository interface to invoke an API of the data provider. This API will typically use a request/response interaction style, such as REST or gRPC. For example, Delayed Delivery Service might retrieve the customer contact info by invoking a REST API implemented by the FTGO monolith.

Figure 13.9. An adapter that implements the CustomerContactInfoRepository interface invokes the monolith’s REST API to retrieve customer information.

In this example, the Delayed Delivery Service’s domain logic retrieves the customer contact info by invoking the CustomerContactInfoRepository interface. The implementation of this interface invokes the monolith’s REST API.

An important benefit of querying data by invoking a query API is its simplicity. The main drawback is that it’s potentially inefficient. A consumer might need to make a large number of requests. A provider might return a large amount of data. Another drawback is that it reduces availability because it’s synchronous IPC. As a result, it might not be practical to use a query API.

An alternative approach is for the data consumer to maintain a replica of the data, as shown in figure 13.10. The replica is essentially a CQRS view. The data consumer keeps the replica up-to-date by subscribing to domain events published by the data provider.

Figure 13.10. The integration glue replicates data from the monolith to the service. The monolith publishes domain events, and an event handler updates the service’s database.

Using a replica has several benefits. It avoids the overhead of repeatedly querying the data provider. Instead, as discussed when describing CQRS in chapter 7, you can design the replica to support efficient queries. One drawback of using a replica, though, is the complexity of maintaining it. A potential challenge, as described later in this section, is the need to modify the monolith to publish domain events.

Now that we’ve discussed how to do queries, let’s consider how to do updates. One challenge with performing updates is the need to maintain data consistency across the service and monolith. The party making the update request (the requestor) has updated or needs to update its database. So it’s essential that both updates happen. The solution is for the service and monolith to communicate using transactional messaging implemented by a framework, such as Eventuate Tram. In simple scenarios, the requestor can send a notification message or publish an event to trigger an update. In more complex scenarios, the requestor must use a saga to maintain data consistency. Section 13.3.2 discusses the implications of using sagas.

Implementing an anti-corruption layer

Imagine you’re implementing a new feature as a brand new service. You’re not constrained by the monolith’s code base, so you can use modern development techniques such as DDD and develop a pristine new domain model. Also, because the FTGO monolith’s domain is poorly defined and somewhat out-of-date, you’ll probably model concepts differently. As a result, your service’s domain model will have different class names, field names, and field values. For example, Delayed Delivery Service has a Delivery entity with narrowly focused responsibilities, whereas the FTGO monolith has an Order entity with an excessive number of responsibilities. Because the two domain models are different, you must implement what DDD calls an anti-corruption layer (ACL) in order for the service to communicate with the monolith.

Pattern: Anti-corruption layer

A software layer that translates between two different domain models in order to prevent concepts from one model polluting another. See https://microservices.io/patterns/refactoring/anti-corruption-layer.html.

The goal of an ACL is to prevent a legacy monolith’s domain model from polluting a service’s domain model. It’s a layer of code that translates between the different domain models. For example, as figure 13.11 shows, Delayed Delivery Service has a CustomerContactInfoRepository interface, which defines a findCustomerContactInfo() method that returns CustomerContactInfo. The class that implements the CustomerContactInfoRepository interface must translate between the ubiquitous language of Delayed Delivery Service and that of the FTGO monolith.

Figure 13.11. A service adapter that invokes the monolith must translate between the service’s domain model and the monolith’s domain model.

The implementation of findCustomerContactInfo() invokes the FTGO monolith to retrieve the customer information and translates the response to CustomerContactInfo. In this example, the translation is quite simple, but in other scenarios it could be quite complex and involve, for example, mapping values such as status codes.
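The adapter's translation step can be sketched as follows. This is a hedged illustration, not the book's actual implementation: `ConsumerDTO` and its field names are invented stand-ins for whatever the monolith's REST API returns, and the REST client is modeled as a plain function so the sketch stays self-contained.

```java
// A sketch of the anti-corruption layer behind CustomerContactInfoRepository.
// The point is the translation from the monolith's vocabulary to the
// service's, not the mechanics of the REST client.
import java.util.function.Function;

public class CustomerContactInfoAdapter {

    // What the monolith's REST API returns, in the monolith's language
    // (hypothetical shape and field names).
    public record ConsumerDTO(String contactEmail, String contactPhone) {}

    // What Delayed Delivery Service's domain model expects.
    public record CustomerContactInfo(String email, String phone) {}

    private final Function<Long, ConsumerDTO> monolithClient;

    public CustomerContactInfoAdapter(Function<Long, ConsumerDTO> monolithClient) {
        this.monolithClient = monolithClient; // a REST client in real code
    }

    public CustomerContactInfo findCustomerContactInfo(long customerId) {
        ConsumerDTO dto = monolithClient.apply(customerId);
        // The ACL translation: monolith names -> service names.
        return new CustomerContactInfo(dto.contactEmail(), dto.contactPhone());
    }

    public static void main(String[] args) {
        CustomerContactInfoAdapter adapter = new CustomerContactInfoAdapter(
            id -> new ConsumerDTO("ann@example.com", "555-0100"));
        System.out.println(adapter.findCustomerContactInfo(7L));
    }
}
```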

An event subscriber, which consumes domain events, also has an ACL. Domain events are part of the publisher’s domain model. An event handler must translate domain events to the subscriber’s domain model. For example, as figure 13.12 shows, the FTGO monolith publishes Order domain events. Delivery Service has an event handler that subscribes to those events.

Figure 13.12. An event handler must translate from the event publisher’s domain model to the subscriber’s domain model.

The event handler must translate domain events from the monolith’s domain language to that of Delivery Service. It might need to map class and attribute names and potentially attribute values.
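A mapping of attribute values is the simplest piece of such a handler to show. The state names on both sides are hypothetical examples, not values from the FTGO code base; the sketch only demonstrates the shape of the translation.

```java
// A sketch of the event-handler side of the ACL: translating an Order domain
// event state published by the monolith into Delivery Service's vocabulary.
public class OrderEventTranslator {

    // Maps the monolith's order states to the service's delivery states
    // (both sets of names are invented for illustration).
    public static String toDeliveryState(String monolithOrderState) {
        switch (monolithOrderState) {
            case "ORDER_ACCEPTED":  return "SCHEDULED";
            case "ORDER_CANCELLED": return "CANCELLED";
            default:
                throw new IllegalArgumentException(
                    "Unmapped monolith state: " + monolithOrderState);
        }
    }

    public static void main(String[] args) {
        System.out.println(toDeliveryState("ORDER_ACCEPTED")); // prints SCHEDULED
    }
}
```

Failing loudly on an unmapped state, as the `default` branch does, is usually preferable to silently dropping events the translator doesn't recognize.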

It’s not just services that use an anti-corruption layer. A monolith also uses an ACL when invoking the service and when subscribing to domain events published by a service. For example, the FTGO monolith schedules a delivery by sending a notification message to Delivery Service. It sends the notification by invoking a method on the DeliveryService interface. The implementation class translates its parameters into a message that Delivery Service understands.
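The monolith-side adapter might look like the following sketch. The message format and the channel are hypothetical; in the book's stack the message would be sent through a transactional-messaging framework such as Eventuate Tram rather than appended to a list.

```java
// A sketch of the monolith-side adapter behind the DeliveryService interface.
// It translates the monolith's method parameters into a notification message
// that Delivery Service understands.
import java.util.ArrayList;
import java.util.List;

public class DeliveryServiceProxy {

    // Stand-in for the message channel to Delivery Service.
    public final List<String> sentMessages = new ArrayList<>();

    // Translates the monolith's parameters into a notification message
    // (payload format invented for illustration).
    public void scheduleDelivery(long orderId, String address) {
        sentMessages.add("ScheduleDelivery{orderId=" + orderId
            + ", address=" + address + "}");
    }

    public static void main(String[] args) {
        DeliveryServiceProxy proxy = new DeliveryServiceProxy();
        proxy.scheduleDelivery(42L, "1 Main St");
        System.out.println(proxy.sentMessages.get(0));
    }
}
```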

How the monolith publishes and subscribes to domain events

Domain events are an important collaboration mechanism. It’s straightforward for a newly developed service to publish and consume events. It can use one of the mechanisms described in chapter 3, such as the Eventuate Tram framework. A service might even publish events using event sourcing, described in chapter 6. It’s potentially challenging, though, to change the monolith to publish and consume events. Let’s look at why.

There are a couple of different ways that a monolith can publish domain events. One approach is to use the same domain event publishing mechanism used by the services. You find all the places in the code that change a particular entity and insert a call to an event publishing API. The problem with this approach is that changing a monolith isn’t always easy. It might be time consuming and error prone to locate all the places and insert calls to publish events. To make matters worse, some of the monolith’s business logic might consist of stored procedures that can’t easily publish domain events.

Another approach is to publish domain events at the database level. You can, for example, use either transaction log tailing or polling, described in chapter 3. A key benefit of using transaction log tailing is that you don’t have to change the monolith. The main drawback of publishing events at the database level is that it’s often difficult to identify the reason for the update and publish the appropriate high-level business event. As a result, the service will typically publish events representing changes to tables rather than business entities.

Fortunately, it’s usually easier for the monolith to subscribe to domain events published by services. Quite often, you can write event handlers using a framework, such as Eventuate Tram. But sometimes it’s even challenging for the monolith to subscribe to events. For example, the monolith might be written in a language that doesn’t have a message broker client. In that situation, you need to write a small “helper” application that subscribes to events and updates the monolith’s database directly.

Now that we’ve looked at how to design the integration glue that enables a service and the monolith to collaborate, let’s look at another challenge you might face when migrating to microservices: maintaining data consistency across a service and a monolith.

13.3.2. Maintaining data consistency across a service and a monolith

When you develop a service, you might find it challenging to maintain data consistency across the service and the monolith. A service operation might need to update data in the monolith, or a monolith operation might need to update data in the service. For example, imagine you extracted Kitchen Service from the monolith. You would need to change the monolith’s order-management operations, such as createOrder() and cancelOrder(), to use sagas in order to keep the Ticket consistent with the Order.

The problem with using sagas, however, is that the monolith might not be a willing participant. As described in chapter 4, sagas must use compensating transactions to undo changes. Create Order Saga, for example, includes a compensating transaction that marks an Order as rejected if it’s rejected by Kitchen Service. The problem with compensating transactions in the monolith is that you might need to make numerous and time-consuming changes to the monolith in order to support them. The monolith might also need to implement countermeasures to handle the lack of isolation between sagas. The cost of these code changes can be a huge obstacle to extracting a service.

Key saga terminology

I cover sagas in chapter 4. Here are some key terms:

  • Saga: A sequence of local transactions coordinated through asynchronous messaging.
  • Compensating transaction: A transaction that undoes the updates made by a local transaction.
  • Countermeasure: A design technique used to handle the lack of isolation between sagas.
  • Semantic lock: A countermeasure that sets a flag in a record that is being updated by a saga.
  • Compensatable transaction: A transaction that needs a compensating transaction because one of the transactions that follows it in the saga can fail.
  • Pivot transaction: A transaction that is the saga’s go/no-go point. If it succeeds, then the saga will run to completion.
  • Retriable transaction: A transaction that follows the pivot transaction and is guaranteed to succeed.

Fortunately, many sagas are straightforward to implement. As covered in chapter 4, if the monolith’s transactions are either pivot transactions or retriable transactions, then implementing sagas should be straightforward. You may even be able to simplify implementation by carefully ordering the sequence of service extractions so that the monolith’s transactions never need to be compensatable. Otherwise, it may be relatively difficult to change the monolith to support compensating transactions. To understand why implementing compensating transactions in the monolith is sometimes challenging, let’s look at some examples, beginning with a particularly troublesome one.

The challenge of changing the monolith to support compensatable transactions

Let’s dig into the problem of compensating transactions that you’ll need to solve when extracting Kitchen Service from the monolith. This refactoring involves splitting the Order entity and creating a Ticket entity in Kitchen Service. It impacts numerous commands implemented by the monolith, including createOrder().

The monolith implements the createOrder() command as a single ACID transaction consisting of the following steps:

  1. Validate order details.
  2. Verify that the consumer can place an order.
  3. Authorize the consumer’s credit card.
  4. Create an Order.

You need to replace this ACID transaction with a saga consisting of the following steps:

  1. In the monolith

    • Create an Order in an APPROVAL_PENDING state.
    • Verify that the consumer can place an order.
  2. In Kitchen Service

    • Validate order details.
    • Create a Ticket in the CREATE_PENDING state.
  3. In the monolith

    • Authorize the consumer’s credit card.
    • Change the state of the Order to APPROVED.
  4. In Kitchen Service

    • Change the state of the Ticket to AWAITING_ACCEPTANCE.

This saga is similar to CreateOrderSaga described in chapter 4. It consists of four local transactions, two in the monolith and two in Kitchen Service. The first transaction creates an Order in the APPROVAL_PENDING state. The second transaction creates a Ticket in the CREATE_PENDING state. The third transaction authorizes the consumer’s credit card and changes the state of the order to APPROVED. The fourth and final transaction changes the state of the Ticket to AWAITING_ACCEPTANCE.
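The forward/compensate control flow underlying such a saga can be sketched without any framework. This is a deliberately simplified, framework-free illustration: steps are stubs flagged to succeed or fail, and a real implementation would use a saga framework such as Eventuate Tram with asynchronous messaging rather than in-process calls.

```java
// A minimal sketch of saga execution: run local transactions in order and,
// on failure, run the compensating transactions of completed steps in
// reverse order.
import java.util.ArrayList;
import java.util.List;

public class CreateOrderSagaSketch {

    // A step with a name, a stubbed outcome, and a compensation action.
    public record Step(String name, boolean succeeds, Runnable compensation) {}

    public static List<String> run(List<Step> steps) {
        List<String> log = new ArrayList<>();
        List<Step> completed = new ArrayList<>();
        for (Step step : steps) {
            if (step.succeeds()) {
                log.add("done: " + step.name());
                completed.add(step);
            } else {
                log.add("failed: " + step.name());
                // Compensate completed steps in reverse order.
                for (int i = completed.size() - 1; i >= 0; i--) {
                    completed.get(i).compensation().run();
                    log.add("compensated: " + completed.get(i).name());
                }
                break;
            }
        }
        return log;
    }

    public static void main(String[] args) {
        // Kitchen Service rejects the ticket, so the monolith's first
        // transaction must be compensated.
        List<String> log = run(List.of(
            new Step("monolith: create APPROVAL_PENDING Order", true, () -> {}),
            new Step("Kitchen Service: create CREATE_PENDING Ticket", false, () -> {})));
        log.forEach(System.out::println);
    }
}
```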

The challenge with implementing this saga is that the first step, which creates the Order, must be compensatable. That’s because the second local transaction, which occurs in Kitchen Service, might fail and require the monolith to undo the updates performed by the first local transaction. As a result, the Order entity needs an APPROVAL_PENDING state, the semantic lock countermeasure described in chapter 4, which indicates that an Order is in the process of being created.
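The semantic lock on the Order entity can be sketched as a small state machine. The method names and the REJECTED state are hypothetical; the text only prescribes the APPROVAL_PENDING and APPROVED states, with a compensating transaction that undoes the pending creation.

```java
// A sketch of the semantic-lock countermeasure on the monolith's Order
// entity. The Order is created in APPROVAL_PENDING; the saga's final step
// approves it, and the compensating transaction rejects it.
public class Order {

    public enum State { APPROVAL_PENDING, APPROVED, REJECTED }

    private State state = State.APPROVAL_PENDING;

    // Forward path: the saga completed successfully.
    public void noteApproved() {
        if (state != State.APPROVAL_PENDING)
            throw new IllegalStateException("not pending: " + state);
        state = State.APPROVED;
    }

    // Compensating transaction: undo the pending creation.
    public void noteRejected() {
        if (state != State.APPROVAL_PENDING)
            throw new IllegalStateException("not pending: " + state);
        state = State.REJECTED;
    }

    public State getState() { return state; }

    public static void main(String[] args) {
        Order order = new Order();
        order.noteRejected(); // e.g. Kitchen Service rejected the Ticket
        System.out.println(order.getState()); // prints REJECTED
    }
}
```

The guard clauses are the crux of the difficulty described next: every code path in the monolith that touches an Order must now cope with the APPROVAL_PENDING state.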

The problem with introducing a new Order entity state is that it potentially requires widespread changes to the monolith. You might need to change every place in the code that touches an Order entity. Making these kinds of widespread changes to the monolith is time consuming and not the best investment of development resources. It’s also potentially risky, because the monolith is often difficult to test.

Sagas don’t always require the monolith to support compensatable transactions

Sagas are highly domain-specific. Some, such as the one we just looked at, require the monolith to support compensating transactions. But it’s quite possible that when you extract a service, you may be able to design sagas that don’t require the monolith to implement compensating transactions. That’s because a monolith only needs to support compensating transactions if the transactions that follow the monolith’s transaction can fail. If each of the monolith’s transactions is either a pivot transaction or a retriable transaction, then the monolith never needs to execute a compensating transaction. As a result, you only need to make minimal changes to the monolith to support sagas.

For example, imagine that instead of extracting Kitchen Service, you extract Order Service. This refactoring involves splitting the Order entity and creating a slimmed-down Order entity in Order Service. It also impacts numerous commands, including createOrder(), which is moved from the monolith to Order Service. In order to extract Order Service, you need to change the createOrder() command to use a saga, using the following steps:

  1. In Order Service

    • Create an Order in an APPROVAL_PENDING state.
  2. In the monolith

    • Verify that the consumer can place an order.
    • Validate order details and create a Ticket.
    • Authorize the consumer’s credit card.
  3. In Order Service

    • Change the state of the Order to APPROVED.

This saga consists of three local transactions, one in the monolith and two in Order Service. The first transaction, which is in Order Service, creates an Order in the APPROVAL_PENDING state. The second transaction, which is in the monolith, verifies that the consumer can place orders, authorizes their credit card, and creates a Ticket. The third transaction, which is in Order Service, changes the state of the Order to APPROVED.

The monolith’s transaction is the saga’s pivot transaction—the point of no return for the saga. If the monolith’s transaction completes, then the saga will run until completion. Only the first and second steps of this saga can fail. The third transaction can’t fail, so the monolith’s transaction, the saga’s second step, never needs to be rolled back. As a result, all the complexity of supporting compensatable transactions is in Order Service, which is much more testable than the monolith.

If all the sagas that you need to write when extracting a service have this structure, you’ll need to make far fewer changes to the monolith. What’s more, it’s possible to carefully sequence the extraction of services to ensure that the monolith’s transactions are either pivot transactions or retriable transactions. Let’s look at how to do that.

Sequencing the extraction of services to avoid implementing compensating transactions in the monolith

As we just saw, extracting Kitchen Service requires the monolith to implement compensating transactions, whereas extracting Order Service doesn’t. This suggests that the order in which you extract services matters. By carefully ordering the extraction of services, you can potentially avoid having to make widespread modifications to the monolith to support compensatable transactions. We can ensure that the monolith’s transactions are either pivot transactions or retriable transactions. For example, if we first extract Order Service from the FTGO monolith and then extract Consumer Service, extracting Kitchen Service will be straightforward. Let’s take a closer look at how to do that.

Once we have extracted Consumer Service, the createOrder() command uses the following saga:

  1. Order Service: create an Order in an APPROVAL_PENDING state.
  2. Consumer Service: verify that the consumer can place an order.
  3. Monolith

    • Validate order details and create a Ticket.
    • Authorize consumer’s credit card.
  4. Order Service: change state of Order to APPROVED.

In this saga, the monolith’s transaction is the pivot transaction. Order Service implements the compensatable transaction.
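The book implements its sagas with a saga framework, but the pivot/compensatable/retriable structure can be illustrated with a minimal hand-rolled orchestrator. This is a sketch, not the FTGO code; all class and step names are illustrative. The key point it demonstrates: steps before the pivot register compensations, while the pivot and the retriable steps after it do not.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// Minimal saga sketch: steps before the pivot are compensatable, the
// pivot has no compensation, and steps after it are retriable (they are
// assumed never to fail).
class SagaSketch {
    record Step(String name, BooleanSupplier action, Runnable compensation) {}

    private final List<Step> steps = new ArrayList<>();
    final List<String> log = new ArrayList<>();

    SagaSketch step(String name, BooleanSupplier action, Runnable compensation) {
        steps.add(new Step(name, action, compensation));
        return this;
    }

    /** Runs the saga; on failure, compensates completed steps in reverse order. */
    boolean execute() {
        List<Step> completed = new ArrayList<>();
        for (Step step : steps) {
            if (step.action().getAsBoolean()) {
                log.add(step.name());
                completed.add(step);
            } else {
                log.add(step.name() + ":FAILED");
                for (int i = completed.size() - 1; i >= 0; i--) {
                    Step s = completed.get(i);
                    if (s.compensation() != null) {
                        log.add("compensate:" + s.name());
                        s.compensation().run();
                    }
                }
                return false;
            }
        }
        return true;
    }
}
```

In the createOrder() saga above, only Order Service's first step would supply a compensation (rejecting the Order); the monolith's transaction, the pivot, and the final approval step would register none.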

Now that we’ve extracted Consumer Service, we can extract Kitchen Service. If we extract this service, the createOrder() command uses the following saga:

  1. Order Service: create an Order in an APPROVAL_PENDING state.
  2. Consumer Service: verify that the consumer can place an order.
  3. Kitchen Service: validate order details and create a PENDING Ticket.
  4. Monolith: authorize consumer’s credit card.
  5. Kitchen Service: change state of Ticket to APPROVED.
  6. Order Service: change state of Order to APPROVED.

In this saga, the monolith’s transaction is still the pivot transaction. Order Service and Kitchen Service implement the compensatable transactions.

We can even continue to refactor the monolith by extracting Accounting Service. If we extract this service, the createOrder() command uses the following saga:

  1. Order Service: create an Order in an APPROVAL_PENDING state.
  2. Consumer Service: verify that the consumer can place an order.
  3. Kitchen Service: validate order details and create a PENDING Ticket.
  4. Accounting Service: authorize consumer’s credit card.
  5. Kitchen Service: change state of Ticket to APPROVED.
  6. Order Service: change state of Order to APPROVED.

As you can see, by carefully sequencing the extractions, you can avoid using sagas that require making complex changes to the monolith. Let’s now look at how to handle security when migrating to a microservice architecture.

13.3.3. Handling authentication and authorization

Another design issue you need to tackle when refactoring a monolithic application to a microservice architecture is adapting the monolith’s security mechanism to support the services. Chapter 11 describes how to handle security in a microservice architecture. A microservices-based application uses tokens, such as JSON Web tokens (JWT), to pass around user identity. That’s quite different than a typical traditional, monolithic application that uses in-memory session state and passes around the user identity using a thread local. The challenge when transforming a monolithic application to a microservice architecture is that you need to support both the monolithic and JWT-based security mechanisms simultaneously.

Fortunately, there’s a straightforward way to solve this problem that only requires you to make one small change to the monolith’s login request handler. Figure 13.13 shows how this works. The login handler returns an additional cookie, which in this example I call USERINFO, that contains user information, such as the user ID and roles. The browser includes that cookie in every request. The API gateway extracts the information from the cookie and includes it in the HTTP requests that it makes to a service. As a result, each service has access to the needed user information.

Figure 13.13. The login handler is enhanced to set the USERINFO cookie, which is a JWT containing user information. API Gateway transfers the USERINFO cookie to the Authorization header when invoking a service.

The sequence of events is as follows:

  1. The client makes a login request containing the user’s credentials.
  2. API Gateway routes the login request to the FTGO monolith.
  3. The monolith returns a response containing the JSESSIONID session cookie and the USERINFO cookie, which contains the user information, such as ID and roles.
  4. The client makes a request, which includes the USERINFO cookie, in order to invoke an operation.
  5. API Gateway validates the USERINFO cookie and includes it in the Authorization header of the request that it makes to the service. The service validates the USERINFO token and extracts the user information.

Let’s look at LoginHandler and API Gateway in more detail.

The monolith’s LoginHandler sets the USERINFO cookie

LoginHandler processes the POST of the user’s credentials. It authenticates the user and stores information about the user in the session. It’s often implemented by a security framework, such as Spring Security or Passport for NodeJS. If the application is configured to use the default in-memory session, the HTTP response sets a session cookie, such as JSESSIONID. In order to support the migration to microservices, LoginHandler must also set the USERINFO cookie containing the JWT that describes the user.
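To make the USERINFO cookie concrete, the following sketch mints a JWT-like token of the form payload.signature, with a base64url-encoded JSON payload and an HMAC-SHA256 signature. In a real application you would use a JWT library (such as jjwt) rather than hand-rolling this; the claim names and the shared secret here are assumptions for illustration only.

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch: mint and verify a JWT-like USERINFO token. Illustrative only;
// use a real JWT library in production.
class UserInfoToken {
    static String mint(String userId, String roles, String secret) {
        String json = "{\"sub\":\"" + userId + "\",\"roles\":\"" + roles + "\"}";
        String payload = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(json.getBytes(StandardCharsets.UTF_8));
        return payload + "." + sign(payload, secret);
    }

    /** Returns the payload JSON if the signature is valid, otherwise null. */
    static String verify(String token, String secret) {
        int dot = token.lastIndexOf('.');
        String payload = token.substring(0, dot);
        if (!sign(payload, secret).equals(token.substring(dot + 1))) return null;
        return new String(Base64.getUrlDecoder().decode(payload), StandardCharsets.UTF_8);
    }

    private static String sign(String payload, String secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            return Base64.getUrlEncoder().withoutPadding()
                    .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

LoginHandler would call something like mint() after authenticating the user and set the result as the USERINFO cookie's value.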

The API gateway maps the USERINFO cookie to the Authorization header

The API gateway, as described in chapter 8, is responsible for request routing and API composition. It handles each request by making one or more requests to the monolith and the services. When the API gateway invokes a service, it validates the USERINFO cookie and passes it to the service in the HTTP request’s Authorization header. By mapping the cookie to the Authorization header, the API gateway ensures that it passes the user identity to the service in a standard way that’s independent of the type of client.
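The mapping itself is simple. Here is a minimal sketch (names are illustrative, and the token validation is elided): given the request's cookies, the gateway forwards the USERINFO value as a bearer token.

```java
import java.util.Map;
import java.util.Optional;

// Sketch of the API gateway's cookie-to-header mapping: forward the
// USERINFO cookie's value to the downstream service as a bearer token.
class UserInfoHeaderMapper {
    static Optional<String> authorizationHeader(Map<String, String> cookies) {
        return Optional.ofNullable(cookies.get("USERINFO"))
                .map(token -> "Bearer " + token);
    }
}
```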

Eventually, we’ll most likely extract login and user management into services. But as you can see, by only making one small change to the monolith’s login handler, it’s now possible for services to access user information. This enables you to focus on developing services that provide the greatest value to the business and delay extracting less valuable services, such as user management.

Now that we’ve looked at how to handle security when refactoring to microservices, let’s see an example of implementing a new feature as a service.

13.4. Implementing a new feature as a service: handling misdelivered orders

Let’s say you’ve been tasked with improving how FTGO handles misdelivered orders. A growing number of customers have been complaining about how customer service handles orders not being delivered. The majority of orders are delivered on time, but from time to time orders are either delivered late or not at all. For example, the courier gets delayed by unexpectedly bad traffic, so the order is picked up and delivered late. Or perhaps by the time the courier arrives at the restaurant, it’s closed, and the delivery can’t be made. To make matters worse, the first time customer service hears about the misdelivery is when they receive an angry email from an unhappy customer.

A true story: my missing ice cream

One Saturday night I was feeling lazy and placed an order using a well-known food delivery app to have ice cream delivered from Smitten. It never showed up. The only communication from the company was an email the next morning saying my order had been canceled. I also got a voicemail from a very confused customer service agent who clearly didn’t know what she was calling about. Perhaps the call was prompted by one of my tweets describing what happened. Clearly, the delivery company had not established any mechanisms for properly handling inevitable mistakes.

The root cause for many of these delivery problems is the primitive delivery scheduling algorithm used by the FTGO application. A more sophisticated scheduler is under development but won’t be finished for a few months. The interim solution is for FTGO to proactively handle delayed or canceled orders by apologizing to the customer, and in some cases offering compensation before the customer complains.

Your job is to implement a new feature that will do the following:

  1. Notify the customer when their order won’t be delivered on time.
  2. Notify the customer when their order can’t be delivered because it can’t be picked up before the restaurant closes.
  3. Notify customer service when an order can’t be delivered on time so that they can proactively rectify the situation by compensating the customer.
  4. Track delivery statistics.

This new feature is fairly simple. The new code must track the state of each Order, and if an Order can’t be delivered as promised, the code must notify the customer and customer support, by, for example, sending an email.

But how—or perhaps more precisely, where—should you implement this new feature? One approach is to implement a new module in the monolith. The problem there is that developing and testing this code will be difficult. What’s more, this approach increases the size of the monolith and thereby makes monolith hell even worse. Remember the Law of Holes from earlier: when you’re in a hole, it’s best to stop digging. Rather than make the monolith larger, a much better approach is to implement these new features as a service.

13.4.1. The design of Delayed Delivery Service

We’ll implement this feature as a service called Delayed Order Service. Figure 13.14 shows the FTGO application’s architecture after implementing this service. The application consists of the FTGO monolith, the new Delayed Delivery Service, and an API Gateway. Delayed Delivery Service has an API that defines a single query operation called getDelayedOrders(), which returns the currently delayed or undeliverable orders. API Gateway routes the getDelayedOrders() request to the service and all other requests to the monolith. The integration glue provides Delayed Order Service with access to the monolith’s data.

Figure 13.14. The design of Delayed Delivery Service. The integration glue provides access to data owned by the monolith, such as the Order and Restaurant entities, and to customer contact information.

The Delayed Order Service’s domain model consists of various entities, including DelayedOrderNotification, Order, and Restaurant. The core logic is implemented by the DelayedOrderService class. It’s periodically invoked by a timer to find orders that won’t be delivered on time. It does that by querying Orders and Restaurants. If an Order can’t be delivered on time, DelayedOrderService notifies the consumer and customer service.
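The check that the timer invokes might look like the following sketch. The entity fields mirror those described in the text, but the exact predicate is an assumption: an order is flagged when its promised delivery time has passed, or when its scheduled pickup falls after the restaurant closes.

```java
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;

// Sketch of DelayedOrderService's periodic check: flag orders whose
// promised delivery time has passed or whose pickup can't happen before
// the restaurant closes. Field names are illustrative.
class DelayedOrderCheck {
    record Order(long id, Instant promisedDeliveryTime, Instant scheduledPickupTime,
                 Instant restaurantCloseTime, boolean delivered) {}

    static List<Long> findDelayedOrderIds(List<Order> orders, Instant now) {
        return orders.stream()
                .filter(o -> !o.delivered())
                .filter(o -> now.isAfter(o.promisedDeliveryTime())
                        || o.scheduledPickupTime().isAfter(o.restaurantCloseTime()))
                .map(Order::id)
                .collect(Collectors.toList());
    }
}
```

For each order ID returned, the service would send the notifications described above.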

Delayed Order Service doesn’t own the Order and Restaurant entities. Instead, this data is replicated from the FTGO monolith. What’s more, the service doesn’t store the customer contact information, but instead retrieves it from the monolith. Let’s look at the design of the integration glue that provides Delayed Order Service access to the monolith’s data.

13.4.2. Designing the integration glue for Delayed Delivery Service

Even though a service that implements a new feature defines its own entity classes, it usually accesses data that’s owned by the monolith. Delayed Delivery Service is no exception. It has a DelayedOrderNotification entity, which represents a notification that it has sent to the consumer. But as I just mentioned, its Order and Restaurant entities replicate data from the FTGO monolith. It also needs to query user contact information in order to notify the user. Consequently, we need to implement integration glue that enables Delivery Service to access the monolith’s data.

Figure 13.15 shows the design of the integration glue. The FTGO monolith publishes Order and Restaurant domain events. Delivery Service consumes these events and updates its replicas of those entities. The FTGO monolith implements a REST endpoint for querying the customer contact information. Delivery Service calls this endpoint when it needs to notify a user that their order cannot be delivered on time.

Figure 13.15. The integration glue provides Delayed Delivery Service with access to data owned by the monolith.

Let’s look at the design of each part of the integration, starting with the REST API for retrieving customer contact information.

Querying customer contact information using CustomerContactInfoRepository

As described in section 13.3.1, there are a couple of different ways that a service such as Delayed Delivery Service could read the monolith’s data. The simplest option is for Delayed Order Service to retrieve data using the monolith’s query API. This approach makes sense when retrieving the User contact information. There aren’t any latency or performance issues, because Delayed Delivery Service rarely needs to retrieve a user’s contact information, and the amount of data is quite small.

CustomerContactInfoRepository is an interface that enables Delayed Delivery Service to retrieve a consumer’s contact info. It’s implemented by a CustomerContactInfoProxy, which retrieves the user information by invoking the monolith’s getCustomerContactInfo() REST endpoint.
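The interface-plus-proxy shape might be sketched as follows. To keep the sketch self-contained, the HTTP call to the monolith's endpoint is abstracted as a function; the method signature and the return type are assumptions.

```java
import java.util.function.Function;

// Sketch of the repository interface and the proxy that delegates to the
// monolith's REST endpoint. The HTTP client is abstracted as a function
// so the sketch stays self-contained.
interface CustomerContactInfoRepository {
    String findCustomerContactInfo(long consumerId);
}

class CustomerContactInfoProxy implements CustomerContactInfoRepository {
    private final Function<Long, String> restClient; // e.g. a GET to the monolith

    CustomerContactInfoProxy(Function<Long, String> restClient) {
        this.restClient = restClient;
    }

    @Override
    public String findCustomerContactInfo(long consumerId) {
        // Invokes the monolith's getCustomerContactInfo() REST endpoint.
        return restClient.apply(consumerId);
    }
}
```

Because the service depends on the interface rather than the proxy, tests can substitute an in-memory implementation.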

Publishing and consuming Order and Restaurant domain events

Unfortunately, it isn’t practical for Delayed Delivery Service to query the monolith for the state of all open Orders and Restaurant hours. That’s because it’s inefficient to repeatedly transfer a large amount of data over the network. Consequently, Delayed Delivery Service must use the second, more complex option and maintain a replica of Orders and Restaurants by subscribing to events published by the monolith. It’s important to remember that the replica isn’t a complete copy of the data from the monolith—it just stores a small subset of the attributes of Order and Restaurant entities.

As described earlier in section 13.3.1, there are a couple of different ways that we can change the FTGO monolith so that it publishes Order and Restaurant domain events. One option is to modify all the places in the monolith that update Orders and Restaurants to publish high-level domain events. The second option is to tail the transaction log to replicate the changes as events. In this particular scenario, we need to synchronize the two databases. We don’t require the FTGO monolith to publish high-level domain events, so either approach is fine.

Delayed Order Service implements event handlers that subscribe to events from the monolith and update its Order and Restaurant entities. The details of the event handlers depend on whether the monolith publishes specific high-level events or low-level change events. In either case, you can think of an event handler as translating an event in the monolith’s bounded context to the update of an entity in the service’s bounded context.
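A handler of this kind can be sketched as follows. The event shape is an assumption (a high-level event carrying the fields the replica needs); the essential point is that the handler upserts only the small subset of attributes the service stores.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an event handler that translates monolith events into
// updates of the service's Order replica. The replica stores only the
// attributes the service needs, not the full Order.
class OrderEventHandlers {
    record OrderEvent(long orderId, String state, String promisedDeliveryTime) {}

    private final Map<Long, OrderEvent> orderReplica = new HashMap<>();

    /** Upserts the replica entry for the order described by the event. */
    void handleOrderEvent(OrderEvent event) {
        orderReplica.put(event.orderId(), event);
    }

    OrderEvent findOrder(long orderId) {
        return orderReplica.get(orderId);
    }
}
```

A handler for low-level change-data-capture events would do the same translation, just starting from row-level changes instead of domain events.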

An important benefit of using a replica is that it enables Delayed Order Service to efficiently query the orders and the restaurant opening hours. One drawback, however, is that it’s more complex. Another drawback is that it requires the monolith to publish the necessary Order and Restaurant events. Fortunately, because Delayed Delivery Service only needs what’s essentially a subset of the columns of the ORDERS and RESTAURANT tables, we shouldn’t encounter the problems described in section 13.3.1.

Implementing a new feature such as delayed order management as a standalone service accelerates its development, testing, and deployment. What’s more, it enables you to implement the feature using a brand new technology stack instead of the monolith’s older one. It also stops the monolith from growing. Delayed order management is just one of many new features planned for the FTGO application. The FTGO team can implement many of these features as separate services.

Unfortunately, you can’t implement all changes as new services. Quite often you must make extensive changes to the monolith to implement new features or change existing features. Any development involving the monolith will most likely be slow and painful. If you want to accelerate the delivery of these features, you must break up the monolith by migrating functionality from the monolith into services. Let’s look at how to do that.

13.5. Breaking apart the monolith: extracting delivery management

To accelerate the delivery of features that are implemented by a monolith, you need to break up the monolith into services. For example, let’s imagine that you want to enhance FTGO delivery management by implementing a new routing algorithm. A major obstacle to developing delivery management is that it’s entangled with order management and is part of the monolithic code base. Developing, testing, and deploying delivery management is likely to be slow. In order to accelerate its development, you need to extract delivery management into a Delivery Service.

I start this section by describing delivery management and how it’s currently embedded within the monolith. Next I discuss the design of the new, standalone Delivery Service and its API. I then describe how Delivery Service and the FTGO monolith collaborate. Finally I talk about some of the changes we need to make to the monolith to support Delivery Service.

Let’s begin by reviewing the existing design.

13.5.1. Overview of existing delivery management functionality

Delivery management is responsible for scheduling the couriers that pick up orders at restaurants and deliver them to consumers. Each courier has a plan that is a schedule of pickup and deliver actions. A pickup action tells the Courier to pick up an order from a restaurant at a particular time. A deliver action tells the Courier to deliver an order to a consumer. The plans are revised whenever orders are placed, canceled, or revised, and as the location and availability of couriers changes.
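A courier plan can be pictured as a time-ordered list of actions. The following sketch is a possible shape, not the FTGO implementation; the class and field names are illustrative.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of a courier plan: an ordered schedule of pickup and deliver
// actions, revised as orders are placed, canceled, or rescheduled.
class CourierPlan {
    enum ActionType { PICKUP, DELIVER }
    record Action(ActionType type, long orderId, Instant time, String address) {}

    private final List<Action> actions = new ArrayList<>();

    /** Adds an action and keeps the plan sorted by time. */
    void add(Action action) {
        actions.add(action);
        actions.sort(Comparator.comparing(Action::time));
    }

    /** Removes both the pickup and deliver actions for a canceled order. */
    void removeOrder(long orderId) {
        actions.removeIf(a -> a.orderId() == orderId);
    }

    List<Action> actions() {
        return List.copyOf(actions);
    }
}
```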

Delivery management is one of the oldest parts of the FTGO application. As figure 13.16 shows, it’s embedded within order management. Much of the code for managing deliveries is in OrderService. What’s more, there’s no explicit representation of a Delivery. It’s embedded within the Order entity, which has various delivery-related fields, such as scheduledPickupTime and scheduledDeliveryTime.

Figure 13.16. Delivery management is entangled with order management in the FTGO monolith.

Numerous commands implemented by the monolith invoke delivery management, including the following:

  • acceptOrder(): Invoked when a restaurant accepts an order and commits to preparing it by a certain time. This operation invokes delivery management to schedule a delivery.
  • cancelOrder(): Invoked when a consumer cancels an order. If necessary, it cancels the delivery.
  • noteCourierLocationUpdated(): Invoked by the courier’s mobile application to update the courier’s location. It triggers the rescheduling of deliveries.
  • noteCourierAvailabilityChanged(): Invoked by the courier’s mobile application to update the courier’s availability. It triggers the rescheduling of deliveries.

Also, various queries retrieve data maintained by delivery management, including the following:

  • getCourierPlan(): Invoked by the courier’s mobile application; returns the courier’s plan.
  • getOrderStatus(): Returns the order’s status, which includes delivery-related information such as the assigned courier and the ETA.
  • getOrderHistory(): Returns similar information as getOrderStatus(), except about multiple orders.

Quite often what’s extracted into a service is, as mentioned in section 13.2.3, an entire vertical slice, with controllers at the top and database tables at the bottom. We could consider the Courier-related commands and queries to be part of delivery management. After all, delivery management creates the courier plans and is the primary consumer of the Courier location and availability information. But in order to minimize the development effort, we’ll leave those operations in the monolith and just extract the core of the algorithm. Consequently, the first iteration of Delivery Service won’t expose a publicly accessible API. Instead, it will only be invoked by the monolith. Next, let’s explore the design of Delivery Service.

13.5.2. Overview of Delivery Service

The proposed new Delivery Service is responsible for scheduling, rescheduling, and canceling deliveries. Figure 13.17 shows a high-level view of the architecture of the FTGO application after extracting Delivery Service. The architecture consists of the FTGO monolith and Delivery Service. They collaborate using the integration glue, which consists of APIs in both the service and monolith. Delivery Service has its own domain model and database.

Figure 13.17. A high-level view of the FTGO application after extracting Delivery Service. The FTGO monolith and Delivery Service collaborate using integration glue, which consists of an API in each. Two key decisions are which functionality and data move to Delivery Service, and how the monolith and Delivery Service collaborate through those APIs.

In order to flesh out this architecture and determine the service’s domain model, we need to answer the following questions:

  • Which behavior and data are moved to Delivery Service?
  • What API does Delivery Service expose to the monolith?
  • What API does the monolith expose to Delivery Service?

These issues are interrelated because the distribution of responsibilities between the monolith and the service affects the APIs. For instance, Delivery Service will need to invoke an API provided by the monolith to access the data in the monolith’s database and vice versa. Later, I’ll describe the design of the integration glue that enables Delivery Service and the FTGO monolith to collaborate. But first, let’s look at the design of Delivery Service’s domain model.

13.5.3. Designing the Delivery Service domain model

To be able to extract delivery management, we first need to identify the classes that implement it. Once we’ve done that, we can decide which classes to move to Delivery Service to form its domain logic. In some cases, we’ll need to split classes. We’ll also need to decide which data to replicate between the service and the monolith.

Let’s start by identifying the classes that implement delivery management.

Identifying the entities and fields that are part of delivery management

The first step in the process of designing Delivery Service is to carefully review the delivery management code and identify the participating entities and their fields. Figure 13.18 shows the entities and fields that are part of delivery management. Some fields are inputs to the delivery-scheduling algorithm, and others are the outputs. The figure shows which of those fields are also used by other functionality implemented by the monolith.

Figure 13.18. The entities and fields accessed by delivery management and by other functionality implemented by the monolith. A field can be read or written, or both, and can be accessed by delivery management and/or the monolith.

The delivery scheduling algorithm reads various attributes including the Order’s restaurant, promisedDeliveryTime, and deliveryAddress, and the Courier’s location, availability, and current plans. It updates the Courier’s plans, the Order’s scheduledPickupTime, and scheduledDeliveryTime. As you can see, the fields used by delivery management are also used by the monolith.

Deciding which data to move to Delivery Service

Now that we’ve identified which entities and fields participate in delivery management, the next step is to decide which of them we should move to the service. In an ideal scenario, the data accessed by the service is used exclusively by the service, so we could simply move that data to the service and be done. Sadly, it’s rarely that simple, and this situation is no exception. All the entities and fields used by the delivery management are also used by other functionality implemented by the monolith.

As a result, when determining which data to move to the service, we need to keep in mind two issues. The first is: how does the service access the data that remains in the monolith? The second is: how does the monolith access data that’s moved to the service? Also, as described earlier in section 13.3, we need to carefully consider how to maintain data consistency between the service and the monolith.

The essential responsibility of Delivery Service is managing courier plans and updating the Order’s scheduledPickupTime and scheduledDeliveryTime fields. It makes sense, therefore, for it to own those fields. We could also move the Courier.location and Courier.availability fields to Delivery Service. But because we’re trying to make the smallest possible change, we’ll leave those fields in the monolith for now.

The design of the Delivery Service domain logic

Figure 13.19 shows the design of the Delivery Service’s domain model. The core of the service consists of domain classes such as Delivery and Courier. The DeliveryServiceImpl class is the entry point into the delivery management business logic. It implements the DeliveryService and CourierService interfaces, which are invoked by DeliveryServiceEventsHandler and DeliveryServiceNotificationsHandlers, described later in this section.

Figure 13.19. The design of Delivery Service’s domain model

The delivery management business logic is mostly code copied from the monolith. For example, we’ll copy the Order entity from the monolith to Delivery Service, rename it to Delivery, and delete all fields except those used by delivery management. We’ll also copy the Courier entity and delete most of its fields. In order to develop the domain logic for Delivery Service, we’ll need to untangle the code from the monolith. We’ll need to break numerous dependencies, which is likely to be time-consuming. Once again, it’s a lot easier to refactor code when using a statically typed language, because the compiler will be your friend.

Delivery Service is not a standalone service. Let’s look at the design of the integration glue that enables Delivery Service and the FTGO monolith to collaborate.

13.5.4. The design of the Delivery Service integration glue

The FTGO monolith needs to invoke Delivery Service to manage deliveries. The monolith also needs to exchange data with Delivery Service. This collaboration is enabled by the integration glue. Figure 13.20 shows the design of the Delivery Service integration glue. Delivery Service has a delivery management API. It also publishes Delivery and Courier domain events. The FTGO monolith publishes Courier domain events.

Figure 13.20. The design of the integration glue. Delivery Service has a delivery management API. The service and the FTGO monolith synchronize data by exchanging domain events.

Let’s look at the design of each part of the integration glue, starting with Delivery Service’s API for managing deliveries.

The design of the Delivery Service API

Delivery Service must provide an API that enables the monolith to schedule, revise, and cancel deliveries. As you’ve seen throughout this book, the preferred approach is to use asynchronous messaging, because it promotes loose coupling and increases availability. One approach is for Delivery Service to subscribe to Order domain events published by the monolith. Depending on the type of the event, it creates, revises, and cancels a Delivery. A benefit of this approach is that the monolith doesn’t need to explicitly invoke Delivery Service. The drawback of relying on domain events is that it requires Delivery Service to know how each Order event impacts the corresponding Delivery.

A better approach is for Delivery Service to implement a notification-based API that enables the monolith to explicitly tell Delivery Service to create, revise, and cancel deliveries. Delivery Service’s API consists of a message notification channel and three message types: ScheduleDelivery, ReviseDelivery, and CancelDelivery. A notification message contains the Order information needed by Delivery Service. For example, a ScheduleDelivery notification contains the pickup time and location and the delivery time and location. An important benefit of this approach is that Delivery Service doesn’t have detailed knowledge of the Order lifecycle. It’s entirely focused on managing deliveries and has no knowledge of orders.
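As a rough illustration, the three notification types might be modeled as simple message classes like the following sketch. The field names and the use of plain strings and timestamps are assumptions for illustration, not the book’s actual definitions.

```java
import java.time.LocalDateTime;

// Hypothetical sketches of the three notification message types sent on
// Delivery Service's notification channel. Field names are assumptions.
class ScheduleDelivery {
    final long orderId;
    final String pickupAddress;
    final LocalDateTime pickupTime;
    final String deliveryAddress;
    final LocalDateTime deliveryTime;

    ScheduleDelivery(long orderId, String pickupAddress, LocalDateTime pickupTime,
                     String deliveryAddress, LocalDateTime deliveryTime) {
        this.orderId = orderId;
        this.pickupAddress = pickupAddress;
        this.pickupTime = pickupTime;
        this.deliveryAddress = deliveryAddress;
        this.deliveryTime = deliveryTime;
    }
}

class ReviseDelivery {
    final long orderId;
    final LocalDateTime newPickupTime;
    final LocalDateTime newDeliveryTime;

    ReviseDelivery(long orderId, LocalDateTime newPickupTime, LocalDateTime newDeliveryTime) {
        this.orderId = orderId;
        this.newPickupTime = newPickupTime;
        this.newDeliveryTime = newDeliveryTime;
    }
}

class CancelDelivery {
    final long orderId;

    CancelDelivery(long orderId) {
        this.orderId = orderId;
    }
}
```

Because each message carries everything Delivery Service needs, the service never has to query the monolith for Order details.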

This API isn’t the only way that Delivery Service and the FTGO monolith collaborate. They also need to exchange data.

How Delivery Service accesses the FTGO monolith’s data

Delivery Service needs to access the Courier location and availability data, which is owned by the monolith. Because that’s potentially a large amount of data, it’s not practical for the service to repeatedly query the monolith. Instead, a better approach is for the monolith to replicate the data to Delivery Service by publishing Courier domain events, CourierLocationUpdated and CourierAvailabilityUpdated. Delivery Service has a CourierEventSubscriber that subscribes to the domain events and updates its version of the Courier. It might also trigger the rescheduling of deliveries.
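A minimal sketch of how such a subscriber might maintain the service’s Courier replica follows. The book names only CourierEventSubscriber and the two event types; the handler signatures and the in-memory replica are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of Delivery Service's CourierEventSubscriber. It applies
// CourierLocationUpdated and CourierAvailabilityUpdated events to an in-memory
// replica of the Courier data owned by the monolith.
class CourierEventSubscriber {

    static class CourierReplica {
        String location;
        boolean available;
    }

    private final Map<Long, CourierReplica> couriers = new HashMap<>();

    void onCourierLocationUpdated(long courierId, String newLocation) {
        courier(courierId).location = newLocation;
        // A real implementation might also trigger rescheduling of deliveries here.
    }

    void onCourierAvailabilityUpdated(long courierId, boolean available) {
        courier(courierId).available = available;
    }

    CourierReplica courier(long courierId) {
        return couriers.computeIfAbsent(courierId, id -> new CourierReplica());
    }
}
```

Replicating via events keeps the data local to the service, so the scheduling algorithm never blocks on a remote query to the monolith.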

How the FTGO monolith accesses Delivery Service data

The FTGO monolith needs to read the data that’s been moved to Delivery Service, such as the Courier plans. In theory, the monolith could query the service, but that requires extensive changes to the monolith. For the time being, it’s easier to leave the monolith’s domain model and database schema unchanged and replicate data from the service back to the monolith.

The easiest way to accomplish that is for Delivery Service to publish Courier and Delivery domain events. The service publishes a CourierPlanUpdated event when it updates a Courier’s plan, and a DeliveryScheduleUpdate event when it updates a Delivery. The monolith consumes these domain events and updates its database.
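The monolith-side consumer might look something like this sketch, in which in-memory maps stand in for the monolith’s database tables; the event payloads and handler shape are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the monolith-side consumer of Delivery Service's
// domain events. It writes the replicated values back into the monolith's
// existing schema (represented here by in-memory maps).
class DeliveryServiceEventConsumer {
    final Map<Long, String> courierPlans = new HashMap<>();      // courierId -> plan
    final Map<Long, String> deliverySchedules = new HashMap<>(); // orderId -> schedule

    void onCourierPlanUpdated(long courierId, String plan) {
        courierPlans.put(courierId, plan);
    }

    void onDeliveryScheduleUpdate(long orderId, String schedule) {
        deliverySchedules.put(orderId, schedule);
    }
}
```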

Now that we’ve looked at how the FTGO monolith and Delivery Service interact, let’s see how to change the monolith.

13.5.5. Changing the FTGO monolith to interact with Delivery Service

在许多方面,实施是提取过程中更容易的部分。修改 FTGO 单体要困难得多。幸运的是,复制 从服务返回到整体式应用的数据会减小更改的大小。但我们仍然需要改变 monolith 来管理 通过调用 .让我们看看如何做到这一点。Delivery ServiceDelivery Service

In many ways, implementing Delivery Service is the easier part of the extraction process. Modifying the FTGO monolith is much more difficult. Fortunately, replicating data from the service back to the monolith reduces the size of the change. But we still need to change the monolith to manage deliveries by invoking Delivery Service. Let’s look at how to do that.

Defining the DeliveryService interface

The first step is to encapsulate the delivery management code with a Java interface corresponding to the messaging-based API defined earlier. This interface, shown in figure 13.21, defines methods for scheduling, rescheduling, and canceling deliveries. Eventually, we’ll implement this interface with a proxy that sends messages to the delivery service. But initially, we’ll implement this API with a class that calls the delivery management code.

Figure 13.21. The first step is to define DeliveryService, a coarse-grained remote API for invoking the delivery management logic.

The DeliveryService interface is a coarse-grained interface that’s well suited to being implemented by an IPC mechanism. It defines schedule(), reschedule(), and cancel() methods, which correspond to the notification message types defined earlier.
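Under those assumptions, the interface might look like this sketch. The method names come from the text; the parameter lists are illustrative, and the recording implementation is a stand-in for the class that initially calls the existing delivery management code.

```java
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the coarse-grained DeliveryService interface. The
// methods correspond to the ScheduleDelivery, ReviseDelivery, and
// CancelDelivery notification types; parameter types are assumptions.
interface DeliveryService {
    void schedule(long orderId, String pickupAddress, LocalDateTime pickupTime,
                  String deliveryAddress, LocalDateTime deliveryTime);
    void reschedule(long orderId, LocalDateTime newPickupTime, LocalDateTime newDeliveryTime);
    void cancel(long orderId);
}

// Stand-in implementation that records invocations. The monolith's first
// implementation of the interface would instead call the existing delivery
// management code directly.
class RecordingDeliveryService implements DeliveryService {
    final List<String> calls = new ArrayList<>();

    public void schedule(long orderId, String pickupAddress, LocalDateTime pickupTime,
                         String deliveryAddress, LocalDateTime deliveryTime) {
        calls.add("schedule:" + orderId);
    }

    public void reschedule(long orderId, LocalDateTime newPickupTime, LocalDateTime newDeliveryTime) {
        calls.add("reschedule:" + orderId);
    }

    public void cancel(long orderId) {
        calls.add("cancel:" + orderId);
    }
}
```

Because callers depend only on the interface, the implementation can later be swapped for a messaging proxy without touching them.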

Refactoring the monolith to call the DeliveryService interface

Next, as figure 13.22 shows, we need to identify all the places in the FTGO monolith that invoke delivery management and change them to use the DeliveryService interface. This may take some time and is one of the most challenging aspects of extracting a service from the monolith.

Figure 13.22. The second step is to change the FTGO monolith to invoke delivery management via the DeliveryService interface.

It certainly helps if the monolith is written in a statically typed language, such as Java, because the tools do a better job of identifying dependencies. If not, then hopefully you have some automated tests with sufficient coverage of the parts of the code that need to be changed.

Implementing the DeliveryService interface

The final step is to replace the DeliveryServiceImpl class with a proxy that sends notification messages to the standalone Delivery Service. But rather than discard the existing implementation right away, we’ll use a design, shown in figure 13.23, that enables the monolith to dynamically switch between the existing implementation and Delivery Service. We’ll implement the DeliveryService interface with a class that uses a dynamic feature toggle to determine whether to invoke the existing implementation or Delivery Service.

Figure 13.23. The final step is to implement DeliveryService with a proxy class that sends messages to Delivery Service. A feature toggle controls whether the FTGO monolith uses the old implementation or the new Delivery Service.

Using a feature toggle significantly reduces the risk of rolling out Delivery Service. We can deploy Delivery Service and test it. And then, once we’re sure it works, we can flip the toggle to route traffic to it. If we then discover that Delivery Service isn’t working as expected, we can switch back to the old implementation.
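That switching logic can be sketched as follows. To keep the sketch self-contained, a single cancel() method stands in for the full interface; the class names and the toggle supplier are assumptions, not the book’s code.

```java
import java.util.function.BooleanSupplier;

// Minimal one-method stand-in for the DeliveryService interface.
interface DeliveryServiceApi {
    void cancel(long orderId);
}

// Hypothetical sketch of the toggle-controlled implementation. It delegates to
// either the old in-monolith delivery management code or the proxy that sends
// notification messages to the standalone Delivery Service, depending on a
// dynamic feature toggle.
class ToggleableDeliveryService implements DeliveryServiceApi {
    private final DeliveryServiceApi oldImplementation;    // calls monolith code
    private final DeliveryServiceApi deliveryServiceProxy; // sends messages
    private final BooleanSupplier useDeliveryService;      // dynamic toggle

    ToggleableDeliveryService(DeliveryServiceApi oldImplementation,
                              DeliveryServiceApi deliveryServiceProxy,
                              BooleanSupplier useDeliveryService) {
        this.oldImplementation = oldImplementation;
        this.deliveryServiceProxy = deliveryServiceProxy;
        this.useDeliveryService = useDeliveryService;
    }

    public void cancel(long orderId) {
        delegate().cancel(orderId);
    }

    private DeliveryServiceApi delegate() {
        // Evaluated on every call, so flipping the toggle takes effect
        // immediately without redeploying the monolith.
        return useDeliveryService.getAsBoolean() ? deliveryServiceProxy : oldImplementation;
    }
}
```

Checking the toggle on each invocation is what makes the switch, and the switch back, instantaneous.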

About feature toggles

Feature toggles, or feature flags, let you deploy code changes without necessarily releasing them to users. They also enable you to dynamically change the behavior of the application without deploying new code. This article by Martin Fowler provides an excellent overview of the topic: https://martinfowler.com/articles/feature-toggles.html.

Once we’re sure that Delivery Service is working as expected, we can then remove the delivery management code from the monolith.

Delivery Service and Delayed Order Service are examples of the services that the FTGO team will develop during their journey to the microservice architecture. Where they go next after implementing these services depends on the priorities of the business. One possible path is to extract Order History Service, described in chapter 7. Extracting this service partially eliminates the need for Delivery Service to replicate data back to the monolith.

After implementing Order History Service, the FTGO team can then extract the services in the order described in section 13.3.2: Order Service, Consumer Service, Kitchen Service, and so on. As the FTGO team extracts each service, the maintainability and testability of their application gradually improves, and their development velocity increases.

Summary

  • Before migrating to a microservice architecture, it’s important to be sure that your software delivery problems are a result of having outgrown your monolithic architecture. You might be able to accelerate delivery by improving your software development process.
  • It’s important to migrate to microservices by incrementally developing a strangler application. A strangler application is a new application consisting of microservices that you build around the existing monolithic application. You should demonstrate value early and often in order to ensure that the business supports the migration effort.
  • A great way to introduce microservices into your architecture is to implement new features as services. Doing so enables you to quickly and easily develop a feature using a modern technology and development process. It’s a good way to quickly demonstrate the value of migrating to microservices.
  • One way to break up the monolith is to separate the presentation tier from the backend, which results in two smaller monoliths. Although it’s not a huge improvement, it does mean that you can deploy each monolith independently. This allows, for example, the UI team to iterate more easily on the UI design without impacting the backend.
  • The main way to break up the monolith is by incrementally migrating functionality from the monolith into services. It’s important to focus on extracting the services that provide the most benefit. For example, you’ll accelerate development if you extract a service that implements functionality that’s being actively developed.
  • Newly developed services almost always have to interact with the monolith. A service often needs to access a monolith’s data and invoke its functionality. The monolith sometimes needs to access a service’s data and invoke its functionality. To implement this collaboration, develop integration glue, which consists of inbound and outbound adapters in the monolith.
  • To prevent the monolith’s domain model from polluting the service’s domain model, the integration glue should use an anti-corruption layer, which is a layer of software that translates between domain models.
  • One way to minimize the impact on the monolith of extracting a service is to replicate the data that was moved to the service back to the monolith’s database. Because the monolith’s schema is left unchanged, this eliminates the need to make potentially widespread changes to the monolith code base.
  • Developing a service often requires you to implement sagas that involve the monolith. But it can be challenging to implement a compensatable transaction that requires making widespread changes to the monolith. Consequently, you sometimes need to carefully sequence the extraction of services to avoid implementing compensatable transactions in the monolith.
  • When refactoring to a microservice architecture, you need to simultaneously support the monolithic application’s existing security mechanism, which is often based on an in-memory session, and the token-based security mechanism used by the services. Fortunately, a simple solution is to modify the monolith’s login handler to generate a cookie containing a security token, which is then forwarded to the services by the API gateway.

 

 

List of Patterns

Application architecture patterns

Monolithic architecture (40)

Microservice architecture (40)

Decomposition patterns

Decompose by business capability (51)

Decompose by subdomain (54)

Messaging style patterns

Messaging (85)

Remote procedure invocation (72)

Reliable communications patterns

Circuit breaker (78)

Service discovery patterns

3rd party registration (85)

Client-side discovery (83)

Self-registration (82)

Server-side discovery (85)

Transactional messaging patterns

Polling publisher (98)

Transaction log tailing (99)

Transactional outbox (98)

Data consistency patterns

Saga (114)

Business logic design patterns

Aggregate (150)

Domain event (160)

Domain model (150)

Event sourcing (184)

Transaction script (149)

Querying patterns

API composition (223)

Command query responsibility segregation (228)

External API patterns

API gateway (259)

Backends for frontends (265)

Testing patterns

Consumer-driven contract test (302)

Consumer-side contract test (303)

Service component test (335)

Security patterns

Access token (354)

Cross-cutting concerns patterns

Externalized configuration (361)

Microservice chassis (379)

Observability patterns

Application metrics (373)

Audit logging (377)

Distributed tracing (370)

Exception tracking (376)

Health check API (366)

Log aggregation (368)

Deployment patterns

Deploy a service as a container (393)

Deploy a service as a VM (390)

Language-specific packaging format (387)

Service mesh (380)

Serverless deployment (416)

Sidecar (410)

Refactoring to microservices patterns

Anti-corruption layer (447)

Strangler application (432)

 

 

The rapid, frequent, and reliable delivery of large, complex applications requires a combination of DevOps, which includes continuous delivery/deployment, small, autonomous teams, and the microservice architecture.

The microservice architecture structures an application as a set of loosely coupled services that are organized around business capabilities. Each team develops, tests, and deploys their services independently.

Index

[SYMBOL][A][B][C][D][E][F][G][H][I][J][K][L][M][N][O][P][Q][R][S][T][U][V][W][X][Z]

SYMBOL

2PC (two-phase commit)

3rd party registration pattern2nd

4+1 view model of software architecture

500 status code, HTTP

A

AbstractAutowiringHttpRequestHandler class

AbstractHttpHandler class

accept() method2nd

acceptance tests

  defining

  executing specifications using Cucumber

  writing using Gherkin

acceptOrder() method

Access Token2nd3rd

ACD (Atomicity, Consistency, Durability)

ACID (Atomicity, Consistency, Isolation, Durability) transactions2nd

ACLs (access control lists)

ActiveMQ message broker

add() method

addOrder() method

AggregateRepository class

aggregates2nd3rd

  consistency boundaries

  creating, finding, and updating

  defining aggregate commands

  defining with ReflectiveMutableCommandProcessingAggregate class

  designing business logic with

  event sourcing

    aggregate history, 2nd

    aggregate methods and events

    event sourcing-based Order aggregate

    persisting aggregates using events

  event sourcing and aggregate history

  explicit boundaries

  granularity

  identifying

  Order aggregate

    methods

    state machine

    structure of

  rules for

  Ticket aggregate

    behavior of

    KitchenService domain service

    KitchenServiceCommandHandler class

    structure of Ticket class

  traditional persistence and aggregate history

aliases

Alternative pattern

AMI (Amazon Machine Image)

anomalies

Anti-corruption layer pattern

AOP (aspect-oriented programming)2nd

Apache Flume

Apache Kafka

Apache Openwhisk

Apache Shiro

API composition pattern

  benefits and drawbacks of

    increased overhead

    lack of transactional data consistency

    risk of reduced availability

  design issues

    reactive programming model

    role of API composer

  findOrder() query operation2nd

  overview of

API gateway

  authentication

  benefits of

  design issues

    being good citizen in architecture

    handling partial failures

    performance and scalability

    reactive programming abstractions

  drawbacks of

  implementation using GraphQL

    connecting schema to data

    defining schema

    executing queries

    integrating Apollo GraphQL server with Express

    optimizing loading using batching and caching

    writing client

  implementation using Netflix Zuul

  implementation using off-the-shelf products/services

    API gateway products

    AWS API gateway service

    AWS Application Load Balancer service

  implementation using Spring Cloud Gateway

    ApiGatewayApplication class

    OrderConfiguration class

    OrderHandlers class

    OrderService class

  mapping USERINFO cookie to Authorization header

  Netflix example

  overview of

    API composition

    architecture

    Backends for frontends pattern

    client-specific API

    edge functions

    ownership model

    protocol translation

    request routing

ApiGatewayApplication class

ApiGatewayMain package

APIGatewayProxyRequestEvent2nd

APIGatewayProxyResponseEvent2nd



APIs

  defining in microservice architecture

  interprocess communication

    creating specification for messaging-based service API

    major, breaking changes

    minor, backward-compatible changes

    semantic versioning

    specifying REST APIs

  refactoring to microservices2nd

  testing microservices

    consumer contract tests for messaging APIs

    consumer-side integration test for API gateway’s OrderServiceProxy

    example contract for REST API.

    See API gateways.



Application architecture patterns

  Microservice architecture2nd

  Monolithic architecture2nd3rd

application infrastructure

application metrics2nd3rd

  collecting service-level metrics

  delivering metrics to metrics service

application modernization2nd

application security

apply() method2nd

architectural styles

  hexagonal

  layered

  microservice architecture

    loose coupling, defined

    relative unimportance of size of service

    role of shared libraries

    services, defined

aspect-oriented programming (AOP)2nd

asynchronous (nonblocking) I/O model

asynchronous interactions

Asynchronous messaging pattern

  competing receivers and message ordering

  creating API specification

    documenting asynchronous operations

    documenting published events

  duplicate messages

    tracking messages and discarding duplicates

    writing idempotent message handlers

  improving availability

    eliminating synchronous interaction

    synchronous communication and availability

  interaction styles

    one-way notifications

    publish/subscribe

    request/response and asynchronous request/response

  libraries and frameworks for

    basic messaging

    command/reply-based messaging

    domain event publishing

  message brokers

    benefits and drawbacks of

    brokerless messaging

    implementing message channels using

    overview of

  overview of

  transactional messaging

    publishing events using Polling publisher pattern

    publishing events using Transaction log tailing pattern

    using database table as message queue



asynchronous request/response interactions

  implementing

  integration tests for

    consumer-side contract tests

    contract tests

    example contract

Atomicity, Consistency, Durability (ACD)

Atomicity, Consistency, Isolation, Durability (ACID) transactions2nd

attribute value

audit logging2nd3rd4th

  adding code to business logic

  aspect-oriented programming

  event sourcing

auditing



authentication and authorization

  refactoring to microservices

    API gateway maps USERINFO cookie to Authorization header

    LoginHandler sets USERINFO cookie

  security in microservice architecture

    handling authentication

    handling authorization

Authorization Server concept

automated testing2nd3rd

automatic sidecar injection

Avro

AWS API gateway service

AWS Application Load Balancer service

AWS DynamoDB

  data modeling and query design

    detecting duplicate events

    findOrderHistory query

    FTGO-order-history table

    paginating query results

    updating orders

  OrderHistoryDaoDynamoDb class

    addOrder() method

    findOrderHistory() method

    idempotentUpdate() method

    notePickedUp() method

  OrderHistoryEventHandlers module

AWS Gateway, deploying RESTful services using

  deploying lambda functions using Serverless framework

  design of Restaurant Service

  packaging service as ZIP file



AWS Lambda

  benefits of lambda functions

  developing lambda functions

  drawbacks of lambda functions

  invoking lambda functions

    defining scheduled lambda functions

    handling events

    handling HTTP requests

    invoking lambda functions using web service requests

  overview of

  RESTful services

    deploying lambda functions using Serverless framework

    design of Restaurant Service

    packaging service as ZIP file

aws.region property

Axon

Azure functions, Microsoft

AbstractAutowiringHttpRequestHandler class

AbstractHttpHandler class

accept() method2nd

acceptance tests

  defining

  executing specifications using Cucumber

  writing using Gherkin

acceptOrder() method

Access Token2nd3rd

ACD (Atomicity, Consistency, Durability)

ACID (Atomicity, Consistency, Isolation, Durability) transactions2nd

ACLs (access control lists)

ActiveMQ message broker

add() method

addOrder() method

AggregateRepository class

aggregates2nd3rd

  consistency boundaries

  creating, finding, and updating

  defining aggregate commands

  defining with ReflectiveMutableCommandProcessingAggregate class

  designing business logic with

  event sourcing

    aggregate history, 2nd

    aggregate methods and events

    event sourcing-based Order aggregate

    persisting aggregates using events

  event sourcing and aggregate history

  explicit boundaries

  granularity

  identifying

  Order aggregate

    methods

    state machine

    structure of

  rules for

  Ticket aggregate

    behavior of

    KitchenService domain service

    KitchenServiceCommandHandler class

    structure of Ticket class

  traditional persistence and aggregate history

aliases

Alternative pattern

AMI (Amazon Machine Image)

anomalies

Anti-corruption layer pattern

AOP (aspect-oriented programming)2nd

Apache Flume

Apache Kafka

Apache Openwhisk

Apache Shiro

API composition pattern

  benefits and drawbacks of

    increased overhead

    lack of transactional data consistency

    risk of reduced availability

  design issues

    reactive programming model

    role of API composer

  findOrder() query operation2nd

  overview of

API gateway

  authentication

  benefits of

  design issues

    being good citizen in architecture

    handling partial failures

    performance and scalability

    reactive programming abstractions

  drawbacks of

  implementation using GraphQL

    connecting schema to data

    defining schema

    executing queries

    integrating Apollo GraphQL server with Express

    optimizing loading using batching and caching

    writing client

  implementation using Netflix Zuul

  implementation using off-the-shelf products/services

    API gateway products

    AWS API gateway service

    AWS Application Load Balancer service

  implementation using Spring Cloud Gateway

    ApiGatewayApplication class

    OrderConfiguration class

    OrderHandlers class

    OrderService class

  mapping USERINFO cookie to Authorization header

  Netflix example

  overview of

    API composition

    architecture

    Backends for frontends pattern

    client-specific API

    edge functions

    ownership model

    protocol translation

    request routing

ApiGatewayApplication class

ApiGatewayMain package

APIGatewayProxyRequestEvent2nd

APIGatewayProxyResponseEvent2nd



APIs

  defining in microservice architecture

  interprocess communication

    creating specification for messaging-based service API

    major, breaking changes

    minor, backward-compatible changes

    semantic versioning

    specifying REST APIs

  refactoring to microservices2nd

  testing microservices

    consumer contract tests for messaging APIs

    consumer-side integration test for API gateway’s OrderServiceProxy

    example contract for REST API.

    See API gateways.



Application architecture patterns

  Microservice architecture2nd

  Monolithic architecture2nd3rd

application infrastructure

application metrics2nd3rd

  collecting service-level metrics

  delivering metrics to metrics service

application modernization2nd

application security

apply() method2nd

architectural styles

  hexagonal

  layered

  microservice architecture

    loose coupling, defined

    relative unimportance of size of service

    role of shared libraries

    services, defined

aspect-oriented programming (AOP)2nd

asynchronous (nonblocking) I/O model

asynchronous interactions

Asynchronous messaging pattern

  competing receivers and message ordering

  creating API specification

    documenting asynchronous operations

    documenting published events

  duplicate messages

    tracking messages and discarding duplicates

    writing idempotent message handlers

  improving availability

    eliminating synchronous interaction

    synchronous communication and availability

  interaction styles

    one-way notifications

    publish/subscribe

    request/response and asynchronous request/response

  libraries and frameworks for

    basic messaging

    command/reply-based messaging

    domain event publishing

  message brokers

    benefits and drawbacks of

    brokerless messaging

    implementing message channels using

    overview of

  overview of

  transactional messaging

    publishing events using Polling publisher pattern

    publishing events using Transaction log tailing pattern

    using database table as message queue



asynchronous request/response interactions

  implementing

  integration tests for

    consumer-side contract tests

    contract tests

    example contract

Atomicity, Consistency, Durability (ACD)

Atomicity, Consistency, Isolation, Durability (ACID) transactions, 2nd

attribute value

audit logging, 2nd, 3rd, 4th

  adding code to business logic

  aspect-oriented programming

  event sourcing

auditing



authentication and authorization

  refactoring to microservices

    API gateway maps USERINFO cookie to Authorization header

    LoginHandler sets USERINFO cookie

  security in microservice architecture

    handling authentication

    handling authorization

Authorization Server concept

automated testing, 2nd, 3rd

automatic sidecar injection

Avro

AWS API gateway service

AWS Application Load Balancer service

AWS DynamoDB

  data modeling and query design

    detecting duplicate events

    findOrderHistory query

    FTGO-order-history table

    paginating query results

    updating orders

  OrderHistoryDaoDynamoDb class

    addOrder() method

    findOrderHistory() method

    idempotentUpdate() method

    notePickedUp() method

  OrderHistoryEventHandlers module

AWS Gateway, deploying RESTful services using

  deploying lambda functions using Serverless framework

  design of Restaurant Service

  packaging service as ZIP file



AWS Lambda

  benefits of lambda functions

  developing lambda functions

  drawbacks of lambda functions

  invoking lambda functions

    defining scheduled lambda functions

    handling events

    handling HTTP requests

    invoking lambda functions using web service requests

  overview of

  RESTful services

    deploying lambda functions using Serverless framework

    design of Restaurant Service

    packaging service as ZIP file

aws.region property

Axon

Azure functions, Microsoft

B

Backends for frontends (BFF) pattern

batching

@Before setUp() method

beforeHandling() method

Big Ball of Mud pattern

big bang rewrite

binary message formats

bounded context

broker-based messaging

  benefits and drawbacks of

  implementing message channels using

  overview of

brokerless messaging

Browser API module

business capability

business logic

  adding audit logging code to

  domain events

    consuming

    defined

    event enrichment

    generating

    identifying

    publishing

    reasons to publish

  domain model design

    aggregates

    problem with fuzzy boundaries

  event sourcing

    benefits of

    drawbacks of

    event publishing

    evolving domain events

    handling concurrent updates using optimistic locking

    idempotent message processing

    overview of

    snapshots, improving performance with

    traditional persistence

  event store implementation

    Eventuate client framework for Java

    Eventuate Local event store

  Kitchen Service business logic

  Order Service business logic

    Order aggregate

    OrderService class

  organization patterns

    Domain model pattern

    domain-driven design

    Transaction script pattern

  sagas and event sourcing together

    creating orchestration-based saga

    implementing choreography-based sagas using event sourcing

    implementing event sourcing-based saga participant

    implementing saga orchestrators using event sourcing



Business logic design patterns

  Aggregate, 2nd

  Domain event

  Domain model

  Event sourcing

  Transaction script

business logic layer, 2nd

by value countermeasure

C

caching, 2nd

cancel() operation

cancelOrder() method

CAP theorem

CCP (Common Closure Principle)

centralized sessions

change failure rate

choreography

choreography-based sagas

  benefits and drawbacks of

  implementing Create Order saga

  implementing using event sourcing

  reliable event-based communication

CI (Continuous Integration), 2nd, 3rd

Circuit breaker pattern

  developing robust RPI proxies

  recovering from unavailable services

Client concept

Client-side discovery pattern

command message

Command query responsibility segregation.

    See CQRS pattern.

command/reply-based messaging

commands

commit tests stage

committed records

Common Closure Principle (CCP)



communication

  flexible

  secure interprocess

communication patterns

commutative update countermeasure

compensatable transactions, 2nd, 3rd

compensating transaction

compile-time tests

component tests, 2nd

  for FTGO Order Service

    OrderServiceComponentTestStepDefinitions class

    running

    writing

  in-process component tests

  out-of-process component tests

condition expression

Conduit

ConfigMap

configurable services

  pull-based externalized configuration

  push-based externalized configuration

@ConfigurationProperties class

consumer contract testing

  for asynchronous request/response interaction

  for messaging APIs

  for publish/subscribe-style interactions

  for REST-based request/response style interactions

consumer group

consumer-driven contract test, 2nd

consumerId parameter

consumer-provider relationship

consumer-side contract test, 2nd



containers

  container image

  Deploy a service as a container, 2nd

  Docker

continuous deployment

  deployment pipeline

Continuous Integration (CI), 2nd, 3rd

controllers, unit tests for

Conway, Melvin

Conway’s law

correlation ID, 2nd

countermeasures, 2nd, 3rd

CQRS (Command query responsibility segregation), 2nd, 3rd, 4th

  benefits of

    efficient implementation

    improved separation of concerns

    querying in event sourcing-based application

  drawbacks of

    more complex architecture

    replication lag

  motivations for using

    findAvailableRestaurants() query operation

    findOrderHistory() query operation

    need to separate concerns

  overview of

    query-only services

    separating commands from queries

  views

    adding and updating

    designing

    implementing with AWS DynamoDB

Create Order saga, 2nd

  CreateOrderSaga orchestrator

  CreateOrderSagaState class

  Eventuate Tram Saga framework

  implementing using choreography

  implementing using orchestration

  KitchenServiceProxy class

create, update, and delete (CRUD) operations

create() method, 2nd

createOrder() operation

CreateOrderSaga orchestrator

CreateOrderSagaState class

CreateOrderSagaTest class



Cross-cutting concerns patterns

  Externalized configuration, 2nd

  Microservice chassis, 2nd

CRUD (create, update, and delete) operations

Cucumber framework

CustomerContactInfoRepository interface, 2nd

D

DAO (data access object), 2nd, 3rd

data access logic layer

data consistency

  API composition pattern and

  maintaining across services

  refactoring to microservices

    sagas and compensatable transactions

    sequencing extraction of services

    supporting compensatable transactions

  Saga pattern, 2nd

data consistency patterns

  Saga pattern, 2nd

DataLoader module

DDD (domain-driven design), 2nd

DDD aggregate pattern

Debezium

Decompose by business capability pattern

  decomposition

  identifying business capabilities

  purpose of business capabilities

decomposition

  Decompose by subdomain

  defining application’s microservice architecture

    defining service APIs

    guidelines for decomposition

    identifying system operations

    obstacles to decomposition

    service definition with Decompose by business capability pattern

    service definition with Decompose by sub-domain pattern

  guidelines for

    Common Closure Principle

    Single Responsibility Principle

  obstacles to

    god classes

    maintaining data consistency across services

    network latency

    obtaining consistent view of data

    synchronous interprocess communication

  patterns

    Decompose by business capability, 2nd

    Decompose by subdomain, 2nd



Delayed Delivery Service

  changing FTGO monolith to interact with

    defining interface

    implementing interface

    refactoring monolith to call interface

  design for

  domain model

    deciding which data to migrate

    design of domain logic

    identifying which entities and fields are part of delivery management

  existing delivery functionality

  integration glue for, 2nd

    CustomerContactInfoRepository interface

    design of API

    how Delivery Service accesses FTGO data

    how FTGO accesses data

    publishing and consuming Order and Restaurant domain events

  overview of

deleted flag

deliver action

DeliveryServiceImpl class

dependencies

deploy stage

deployment

  Language-specific packaging format pattern

    benefits of

    drawbacks of

  RESTful services using AWS Lambda and AWS Gateway

    deploying lambda functions using Serverless framework

    design of Restaurant Service

    packaging service as ZIP file

  Serverless deployment pattern

    benefits of lambda functions

    developing lambda functions

    drawbacks of lambda functions

    invoking lambda functions

    overview of

  Service as container pattern

    benefits of

    Docker

    drawbacks of

  Service as virtual machine pattern

    benefits of

    drawbacks of

  Service mesh pattern

  Sidecar pattern

  with Kubernetes

    deploying API gateway

    deploying Restaurant Service

    overview of

    service meshes

    zero-downtime deployments

deployment frequency



Deployment patterns

  Deploy a service as a container, 2nd

  Deploy a service as a VM, 2nd

  Language-specific packaging format, 2nd

  Serverless deployment

  Service mesh

  Sidecar

deployment pipeline

Deployment view

DestinationRule

dirty reads

Distributed tracing pattern, 2nd, 3rd

  distributed tracing server

  instrumentation libraries

Distributed Transaction Processing (DTP)

Docker

  building images

  pushing images to registry

  running containers

docker build command

Docker containers

docker push command

docker run command

docker tag command

document message

domain event publishing

domain events, 2nd

  consuming, 2nd

  defined

  defining

  event enrichment

  event schema evolution

  generating

  identifying

  managing schema changes through upcasting

  publishing, 2nd, 3rd, 4th

  reasons to publish

  subscribing to, 2nd

domain model, 2nd

  aggregates

    consistency boundaries

    designing business logic with

    explicit boundaries

    granularity

    identifying aggregates

    rules for

  creating high-level domain model

  Delivery Service

    deciding which data to migrate

    design of domain logic

    identifying which entities and fields are part of delivery management

  problem with fuzzy boundaries

  splitting



domain services

  KitchenService

  unit tests for

domain-driven design (DDD), 2nd

DSL (domain-specific language)

DTP (Distributed Transaction Processing)

dumb pipes

duplicate messages

  tracking messages and discarding duplicates

  writing idempotent message handlers

DynamoDB streams

E

edge functions

Elastic Beanstalk

Elasticsearch

@EnableGateway annotation

end-to-end tests

  designing

  running

  writing

Enterprise Service Bus (ESB)

entities, unit tests for

Entity object, DDD

enums

ESB (Enterprise Service Bus)

event.

    See Domain events.



event handlers

  events generated by AWS services

  idempotent

  unit tests for

event message

event publishing

  Asynchronous messaging pattern, 2nd, 3rd

  domain events

    consuming

    defined

    event enrichment

    generating and publishing

    identifying

    reasons for

  event sourcing, 2nd

  traditional persistence and

  using polling

  using transaction log tailing

event sourcing

  audit logging

  benefits of

    avoids O/R impedance mismatch problem

    preserves aggregate history

    reliable domain event publishing

    time machine for developers

  concurrent updates and optimistic locking

  drawbacks of

    complexity

    deleting data

    evolving events

    learning curve

    querying event store

  event publishing

    using polling

    using transaction log tailing

  evolving domain events

    event schema evolution

    managing schema changes through upcasting

  idempotent message processing

    with NoSQL-based event store

    with RDBMS-based event store

  overview of

    aggregate methods required to generate events

    event sourcing-based Order aggregate

    events representing state changes

    persisting aggregates using events

  sagas and

    creating orchestration-based saga

    implementing choreography-based sagas using event sourcing

    implementing event sourcing-based saga participant

    implementing saga orchestrators using event sourcing

  snapshots and performance improvement

  trouble with traditional persistence

    audit logging

    event publishing bolted to business logic

    lack of aggregate history

    Object-Relational impedance mismatch

Event Store

event store implementation

  Eventuate client framework for Java

    AggregateRepository class

    defining aggregate commands

    defining aggregates with ReflectiveMutableCommandProcessingAggregate class

    defining domain events

    subscribing to domain events

  Eventuate Local event store

    consuming events by subscribing to event broker

    event relay propagates events from database to message broker

    schema

event storming

event-driven I/O

@EventHandlerMethod annotation

events.

    See Domain events.

@EventSubscriber annotation

Eventuate framework, 2nd, 3rd

  and updating aggregates with the AggregateRepository class

  defining aggregate commands

  defining aggregates with ReflectiveMutableCommandProcessingAggregate class

  defining domain events

  subscribing to domain events

Eventuate Local event store

  consuming events by subscribing to event broker

  event relay propagates events from database to message broker

  schema

Eventuate Tram, 2nd

Eventuate Tram Saga framework

Exception tracking pattern, 2nd, 3rd

Express framework

external API patterns

  API gateway, 2nd, 3rd, 4th

  API gateway implementation

    using GraphQL

    using Netflix Zuul

    using off-the-shelf products/services

    using Spring Cloud Gateway

  API gateway pattern, 2nd, 3rd, 4th

    benefits of

    design issues

    drawbacks of

    Netflix example

    overview of

  Backends for frontends, 2nd, 3rd

  design issues

    browser-based JavaScript applications

    FTGO mobile client

    third-party applications

    web applications

  externalized configuration

    pull-based

    push-based

Externalized Configuration pattern, 2nd

F

Factory object, DDD

fault isolation

feature flags

feature toggles

filter expression

filter parameter

find() operation

findAvailableRestaurants() query operation

findCustomerContactInfo() method

findOrder() operation, 2nd

findOrderHistory() query operation, 2nd

  defining index for

  implementing

FindRestaurantRequestHandler class

Fission framework

Fluentd

Flume

fold operation



FTGO application

  API design issues for mobile client

  changing monolith to interact with Delivery Service

  component tests for Order Service

  deploying with Kubernetes

    API gateway

    Restaurant Service

    service meshes

    zero-downtime deployments

  microservice architecture of

  monolithic architecture of

ftgo-db-secret

FtgoGraphQLClient class

functional decomposition

fuzzy boundaries

G

GDPR (General Data Protection Regulation)

generalization pattern

GET REST endpoint

getDelayedOrders() method

getOrderDetails() query



Gherkin

  executing specifications using Cucumber

  writing acceptance tests

Go Kit

god classes

GoLang (Go language), 2nd

Google Cloud functions

graph-based schema

GraphQL, 2nd

  connecting schema to data

  defining schema

  executing queries

  integrating Apollo GraphQL server with Express

  load optimization using batching and caching

  writing client

gRPC

H

handleHttpRequest() method

handleRequest() method

health check, 2nd

Health check API pattern, 2nd

  implementing endpoint

  invoking endpoint

hexagonal architecture, 2nd

high-level design patterns

Honeybadger

HttpServletResponse

Humble, Jez

I

idempotent message processing, 2nd

  CQRS views

  event sourcing-based saga participant

  with NoSQL-based event store

  with RDBMS-based event store

idempotentUpdate() method

IDL (interface definition language)

-ilities, 2nd, 3rd

Implementation view

inbound adapters, 2nd

infrastructure patterns

init system, Linux

in-memory security context

instrumentation libraries

integration glue

  designing API for

  for Delayed Delivery Service, 2nd

    CustomerContactInfoRepository interface

    design of API

    how Delivery Service accesses FTGO data

    how FTGO accesses data

    publishing and consuming Order and Restaurant domain events

  how monolith publishes and subscribes to domain events

  implementing anti-corruption layer

  picking interaction style and IPC mechanism

integration tests

  asynchronous request/response interactions

    example contract

    tests for asynchronous request/response interaction

  persistence integration tests

  publish/subscribe-style interactions

    contract for publishing OrderCreated event

    tests for Order History Service

    tests for Order Service

  REST-based request/response style interactions

    example contract

    tests for API gateway OrderServiceProxy

    tests for Order Service

interaction styles, 2nd

  asynchronous

  one-way notifications

  publish/async responses

  publish/subscribe

  request/response and asynchronous request/response

  selecting

interface definition language (IDL)

invariants

IPC (interprocess communication), 2nd, 3rd

  overview of

    defining APIs

    evolving APIs

    interaction styles

    message formats

  using asynchronous Messaging pattern

    competing receivers and message ordering

    creating API specification

    duplicate messages

    improving availability

    interaction styles

    libraries and frameworks for

    message brokers

    overview of

    transactional messaging

  using synchronous Remote procedure invocation pattern

    Circuit breaker pattern

    gRPC

    REST

    service discovery

Istio

  deploying services

  Envoy proxy

  service meshes

J

java -jar command

Jenkins

JSESSIONID cookie

JSON message

JUL (java.util.logging)

JWT (JSON Web Token), 2nd

K

Kafka

key condition expression

Kibana

Kitchen Service

  business logic

  Ticket aggregate

KitchenServiceCommandHandler class

KitchenServiceProxy class

Kong package

kubectl apply command

kubectl apply -f command

Kubernetes

  deploying API gateway

  deploying Restaurant Service

  overview of

    architecture

    key concepts

  service meshes

    deploying services

    deploying v2 of Consumer Service

    Istio

    routing production traffic to v2

    routing rules to route to v1 version

    routing test traffic to v2

  zero-downtime deployments

L

Lagom

lambda functions, 2nd

  benefits of

  deploying using Serverless framework

  developing

  drawbacks of

  invoking

    defining scheduled lambda functions

    handling events generated by AWS services

    handling HTTP requests

    using web service request

Language-specific packaging format pattern

  benefits of

    efficient resource utilization

    fast deployment

  drawbacks of

    automatically determining where to place service instances

    lack of encapsulation of technology stack

    lack of isolation

    no ability to constrain resources consumed

latency

layered architectural style

layered file system

lead time, 2nd

lines of code (LOC) application

LinkedIn Databus

Linkerd

livenessProbe

LoadBalancer service

LOC (lines of code) application

Log aggregation pattern, 2nd, 3rd

  log aggregation infrastructure

  log generation

log4j

Logback

Logical view

LoginHandler, 2nd

Logstash

loose coupling, 2nd

lost updates

M

MAJOR part, Semvers

makeContextWithDependencies() function

manual sidecar injection

Martin, Robert C.

master machine

mean time to recover

Memento pattern

message brokers, 2nd

  benefits and drawbacks of

  implementing message channels using

  overview of

message buffering

message channels, 2nd

message handler adapter class

message handlers, unit tests for

message identifier

message ordering

message sender adapter class

messaging.

    See Asynchronous messaging pattern.

Messaging style patterns.

    See Asynchronous messaging pattern.

metrics collection

Micro framework

micrometer-registry-prometheus library

microservice architecture, 2nd, 3rd

  as form of modularity

  benefits of

    continuous delivery and deployment of large, complex applications

    fault isolation improvement

    independently scalable services

    new technology experimentation and adoption

    small, easily maintained services

  defining

    decomposition guidelines

    defining service APIs

    identifying system operations

    obstacles to decomposing an application into services

    service definition with Decompose by business capability pattern

    service definition with Decompose by sub-domain pattern

  drawbacks of

    adoption timing

    challenge of finding right services

    complex distributed systems

    deployment coordination

  each service has own database

  FTGO application

  loose coupling, defined

  not silver bullet

  relationships between process, organization, and

    human side of adopting microservices

    software development and delivery organization

    software development and delivery process

  relative unimportance of size of service

  role of shared libraries

  scale cube

    X-axis scaling

    Y-axis scaling

    Z-axis scaling

  service-oriented architecture versus

  services, defined

  software architecture

    4+1 view model of

    definition of

    relevance of

  transaction management

    maintaining data consistency

    need for distributed transactions

    trouble with distributed transactions

Microservice chassis pattern, 2nd

  service meshes

  using

MINOR part, Semvers

Mixer

Mobile API module

Mockito

mocks

modularity, microservice architecture as form of

Mono abstraction

monolithic architecture, 2nd

  benefits of

  causes of monolithic hell

    intimidation due to complexity

    long and arduous path from commit to deployment

    reliability challenges

    scaling challenges

    slow development

    technology stack obsolescence

  FTGO monolithic architecture

multiply() method

MyBATIS

N

Netflix Falcor

Netflix Hystrix

Netflix Zuul

Netflix, as API gateway

network latency

network timeouts

NodePort service

nodes, 2nd

nonblocking I/O

nonfunctional requirements

non-key attributes

NoSQL-based event store

  creating saga orchestrator when using

  idempotent message processing when using

  SQL versus

notePickedUp() method

O

O/R (Object-Relational) impedance mismatch, 2nd

OAuth 2.0 protocol

object-oriented design pattern

object-oriented programming (OOP)

Object-Relational (O/R) impedance mismatch, 2nd

observability

observability patterns

  Application metrics

  Audit logging

  Distributed tracing

  Exception tracking

  Health check API

  Log aggregation, 2nd

observable services

  Application metrics pattern

    collecting service-level metrics

    delivering metrics to metrics service

  Audit logging pattern

    adding code to business logic

    aspect-oriented programming

    event sourcing

  Distributed tracing pattern

    distributed tracing server

    instrumentation libraries

  Exception tracking pattern

  Health check API pattern

    implementing endpoint

    invoking endpoint

  Log aggregation pattern

    log generation

    logging aggregation infrastructure

role-based authorization

one-size-fits-all (OSFA)

one-to-many interaction

one-to-one interaction

one-way notifications, 2nd

one-way notification-style API

OOP (object-oriented programming)

opaque tokens

OpenWhisk

optimistic locking

Optimistic Offline Lock pattern

orchestration, 2nd

orchestration-based sagas

  benefits and drawbacks of

  creating

  implementing Create Order saga

  implementing using event sourcing

  modeling saga orchestrators as state machines

  transactional messaging and

Order aggregate

  event sourcing-based

  methods

  state machine

  structure of

Order domain events, publishing and consuming

Order History Service

Order Service

  business logic

    Order aggregate

    OrderService class

  consumer-driven contract integration tests for

  consumer-driven contract tests for

  OrderCommandHandlers class

  OrderService class

  OrderServiceConfiguration class

OrderCommandHandlers class

OrderConfiguration class

OrderCreated event

OrderDetailsRequestHandler

OrderHandlers class

OrderHistoryDaoDynamoDb class

  addOrder() method

  findOrderHistory() method

  idempotentUpdate() method

  notePickedUp() method

OrderHistoryEventHandlers module

OrderService class, 2nd, 3rd

OrderServiceComponentTestStepDefinitions class

OrderServiceConfiguration class

OrderServiceProxy

OSFA (one-size-fits-all)

outbound adapters, 2nd, 3rd

outstanding requests

P

pagination parameter

partition key

Passport framework

PATCH part, Semvers

patterns and pattern language

  by name

    3rd party registration

    Access token

    Aggregate

    Anti-corruption layer

    API composition

    API gateway

    Application metrics

    Audit logging

    Backends for frontends

    Circuit breaker

    Client-side discovery

    Command query responsibility segregation

    Consumer-driven contract test

    Consumer-side contract test

    Decompose by business capability

    Decompose by subdomain

    Deploy a service as a container

    Deploy a service as a VM

    Distributed tracing

    Domain event

    Domain model

    Event sourcing

    Exception tracking

    Externalized configuration

    Health check API

    Language-specific packaging format

    Log aggregation

    Messaging

    Microservice architecture

    Microservice chassis

    Monolithic architecture

    Polling publisher

    Remote procedure invocation

    Saga

    Self registration

    Serverless deployment

    Server-side discovery

    Service component test

    Service mesh

    Sidecar

    Strangler application

    Transaction log tailing

    Transaction script

    Transactional outbox

  groups of patterns

    communication patterns

    data consistency patterns

    for automated testing of services

    for decomposing applications into services

    for handling cross-cutting concerns

    for querying data

    observability patterns

    security patterns

    service deployment patterns

  sections of patterns

    forces

    related patterns

    resulting context

pending state

persistence

  persisting aggregates using events

  traditional approach to

    audit logging

    event publishing bolted to business logic

    lack of aggregate history

    object-relational impedance mismatch

persistence integration tests

Persistence layer

pessimistic view countermeasure

pickup action

Pilot

pivot transaction, 2nd

pods

point-to-point channel

policy enforcement

polling

Polling publisher pattern

ports

pre-commit tests stage

predecessor pattern

Presentation layer

presentation logic

primary key-based queries

Process view

process() method, 2nd

production-ready service development

  configurable services

    pull-based externalized configuration

    push-based externalized configuration

  Microservice chassis pattern

    service meshes

    using

  observable services

    Application metrics pattern

    Audit logging pattern

    Distributed tracing pattern

    Exception tracking pattern

    Health check API pattern

    Log aggregation pattern

  secure services

    handling authentication in API gateway

    handling authorization

    in traditional monolithic application

    using JWTs to pass user identity and roles

    using OAuth 2.0

Prometheus

properties, graph-based schema

Protocol Buffers

provider service

proxy classes

proxy interface

pseudonymization

Public API module

publish() method

publish/async responses

publish/subscribe-style interaction

  implementing

  integration tests for

    contract for publishing OrderCreated event

    tests for Order History Service

    tests for Order Service

publish-subscribe channel

pull model of externalized configuration, 2nd

push model of externalized configuration, 2nd

Q

quality attributes, 2nd, 3rd

quality of service, 2nd

queries

query arguments

query() operation, 2nd

querying patterns

  API composition pattern, 2nd, 3rd, 4th

    benefits and drawbacks of

    design issues

    findOrder() query operation, 2nd

    overview of

  CQRS pattern, 2nd, 3rd, 4th, 5th, 6th

    benefits of

    drawbacks of

    motivations for using

    overview of

R

RabbitMQ

rate limiting

RDBMS-based event store

  creating saga orchestrator when using

  idempotent message processing with

reactive programming model

readinessProbe, 2nd

receiving port interface

reduce operation

refactoring

  application modernization

  demonstrating value

  designing how service and monolith collaborate

    authentication and authorization

    data consistency

    integration glue

  extracting delivery management

    changing FTGO monolith to interact with Delivery Service

    designing Delivery Service domain model

    designing Delivery Service integration glue

    existing delivery functionality

    overview of Delivery Service

  implementing new features as services

    design for Delayed Delivery Service

    integration glue for Delayed Delivery Service

  minimizing changes

  overview of

  reasons for

  strategies for

    extracting business capabilities into services

    implementing new features as services

    separating presentation tier from backend

  technical deployment infrastructure

Refactoring to microservices patterns

  Anti-corruption layer

  Strangler application

ReflectiveMutableCommandProcessingAggregate class

Refresh Token concept

Releasing services

Reliable communications pattern

  Circuit breaker, 2nd

Remote procedure invocation (RPI) pattern

  Circuit breaker pattern

    developing robust RPI proxies

    recovering from unavailable services

  gRPC

  REST

    benefits and drawbacks of

    fetching multiple resources in single request

    mapping operations to HTTP verbs

    REST maturity model

    specifying REST APIs

  service discovery

    overview of

    using application-level service discovery patterns

    using platform-provided service discovery patterns

reply channel header

Repository object, DDD

request attribute

request logging

request/async response-style API

request/response interactions

  asynchronous

  integration tests for REST-based

RequestHandler interface

reread value countermeasure

Resource Server concept

REST

  benefits and drawbacks of

  fetching multiple resources in single request

  mapping operations to HTTP verbs

  REST maturity model

  specifying REST APIs

Rest Assured Mock MVC

Restaurant domain events

Restaurant Service

  creating services

  deploying

  design of

    AbstractAutowiringHttpRequestHandler class

    AbstractHttpHandler class

    FindRestaurantRequestHandler class

REST-based request/response style interactions, integration tests for

  example contract

  tests for API gateway OrderServiceProxy

  tests for Order Service

RESTful services

  deploying lambda functions using Serverless framework

  design of Restaurant Service

  packaging service as ZIP file

retriable transactions, 2nd, 3rd

revise() method

S

SaaS (Software-as-a-Service)

saga orchestration package

Saga pattern

SagaOrchestratorCreated event

SagaOrchestratorUpdated event

SagaReplyRequested pseudo event

sagas, 2nd, 3rd, 4th, 5th, 6th

  coordinating

    choreography-based sagas

    orchestration-based sagas

  Create Order saga

    CreateOrderSaga orchestrator

    CreateOrderSagaState class

    Eventuate Tram Saga framework

    KitchenServiceProxy class

  creating orchestration-based saga

    with a NoSQL-based event store

    with RDBMS-based event store

  implementing choreography-based sagas using event sourcing

  implementing event sourcing-based saga participant

  implementing saga orchestrators using event sourcing

    persisting using event sourcing

    processing replies exactly once

    sending command messages reliably

  lack of isolation

    anomalies caused by

    countermeasures for handling

  Order Service

    OrderCommandHandlers class

    OrderService class

    OrderServiceConfiguration class

  transaction management

    maintaining data consistency

    need for distributed transactions

    trouble with distributed transactions

  unit tests for

SATURN conference

save() method

scalability

scale cube

  X-axis scaling

  Y-axis scaling

  Z-axis scaling

secure services

  authentication in API gateway

  authorization

  in traditional monolithic application

  using JWTs to pass user identity and roles

  using OAuth 2.0

security patterns

  Access token, 2nd, 3rd

SELECT statements

Self registration pattern

semantic lock

semantic lock countermeasure

sending port interface

Serverless deployment with lambda

  benefits of lambda functions

  developing lambda functions

  drawbacks of lambda functions

  invoking lambda functions

    defining scheduled lambda functions

    handling events generated by AWS services

    handling HTTP requests

    using web service request

  overview of

Serverless framework

server-side discovery pattern

service API definition

  assigning system operations to services

  determining APIs required to support collaboration between services

Service as a container pattern

  benefits of

  Docker

    building Docker images

    pushing Docker images to registry

    running Docker containers

  drawbacks of

Service as a virtual machine pattern

  benefits of

    mature cloud infrastructure

    service instances are isolated

    VM image encapsulates technology stack

  drawbacks of

    less-efficient resource utilization

    relatively slow deployments

    system administration overhead

service component test, 2nd

service configurability

service definition

  Decompose by business capability pattern

    decomposition

    identifying business capabilities

    purpose of business capabilities

  Decompose by sub-domain pattern

service deployment patterns

service discovery

  3rd party registration, 2nd

  Client-side discovery

  overview of

  Self registration

  Server-side discovery

service meshes, 2nd

  deploying v2 of Consumer Service

  Istio

  routing production traffic to v2

  routing rules to route to v1 version

  routing test traffic to v2

Service object, DDD

service() method

service-oriented architecture (SOA)

SES (Simple Email Service)

SessionBasedSecurityInterceptor

sessions

setUp() method

sharded channel

Shiro

Sidecar pattern

Simple Email Service (SES)

Single persistence layer

Single presentation layer

Single Responsibility Principle (SRP)

smart pipes

snapshots, 2nd

SOA (service-oriented architecture)

sociable unit test

software architecture

  4+1 view model of

  definition of

  relevance of

software pattern

Software-as-a-Service (SaaS)

solitary unit test

SoundCloud

specialization pattern

Spring Cloud Contract

Spring Cloud Gateway

  ApiGatewayApplication class

  OrderConfiguration class

  OrderHandlers class

  OrderService class

Spring Mock MVC

Spring Security

SPRING_APPLICATION_JSON variable

SQL

SRP (Single Responsibility Principle)



state machines

  modeling saga orchestrators as

  Order aggregate

Strangler Application pattern

Strategy pattern

stubs, 2nd

successor pattern

SUT (system under test)

synchronous I/O model

synchronous interactions

system operations

  assigning to services

  creating high-level domain model

  defining

  identifying

system under test (SUT)

System.getenv() method

T

telemetry

test cases

test double

test pyramid

test quadrant

@Test shouldCalculateTotal() method

@Test shouldCreateOrder() method

test suites

testing

  acceptance tests

    defining

    writing using Gherkin

  challenge of

    consumer contract testing

    consumer contract testing for messaging APIs

    Spring Cloud Contract

  component tests

    for FTGO Order Service

    in-process component tests

    out-of-process component tests

  Consumer-driven contract test, 2nd

  Consumer-side contract test, 2nd

  deployment pipeline

  end-to-end tests

    designing

    running

    writing

  integration tests

    contract tests for asynchronous request/response interactions

    persistence integration tests

    publish/subscribe-style interactions

    REST-based request/response style interactions

  overview of

    automated tests

    different types of tests

    mocks and stubs

    test pyramid

    test quadrant

  Service component test, 2nd

  unit tests

    for controllers

    for domain services

    for entities

    for event and message handlers

    for sagas

    for value objects

testuser header

text-based message formats

Ticket aggregate

  behavior of

  KitchenService domain service

  KitchenServiceCommandHandler class

  structure of Ticket class

tight coupling

timeouts

TLS (Transport Layer Security)

tokens

Traefik

traffic management

transaction log tailing, 2nd

transaction management

  maintaining data consistency

  need for distributed transactions

  trouble with distributed transactions.

    See also sagas.

Transaction script pattern

@Transactional annotation

transactional messaging

  Polling publisher pattern

  Transaction log tailing pattern

  Transactional outbox pattern, 2nd

  using database table as message queue

transparent tokens

Transport Layer Security (TLS)

two-phase commit (2PC)

U

Ubiquitous Language

unit tests

  for controllers

  for domain services

  for entities

  for event and message handlers

  for sagas

  for value objects

upcasting

UPDATE statement

update() method, 2nd, 3rd

UpdateItem() operation



USERINFO cookie

  LoginHandler and

  mapping to Authorization header

V

Value object, DDD

value objects, unit tests for

version file countermeasure

VIP (virtual IP) address

VirtualService

VMs (virtual machines)

W

WAR (Web Application Archive) file

WebSockets

X

XML message

Z

ZeroMQ

Zipkin

List of Figures

Chapter 1. Escaping monolithic hell

Figure 1.1. The FTGO application has a hexagonal architecture. It consists of business logic surrounded by adapters that implement UIs and interface with external systems, such as mobile applications and cloud services for payments, messaging, and email.

Figure 1.2. A case of monolithic hell. The large FTGO developer team commits their changes to a single source code repository. The path from code commit to production is long and arduous and involves manual testing. The FTGO application is large, complex, unreliable, and difficult to maintain.

Figure 1.3. The scale cube defines three separate ways to scale an application: X-axis scaling load balances requests across multiple, identical instances; Z-axis scaling routes requests based on an attribute of the request; Y-axis functionally decomposes an application into services.

Figure 1.4. X-axis scaling runs multiple, identical instances of the monolithic application behind a load balancer.

Figure 1.5. Z-axis scaling runs multiple identical instances of the monolithic application behind a router, which routes based on a request attribute. Each instance is responsible for a subset of the data.

Figure 1.6. Y-axis scaling splits the application into a set of services. Each service is responsible for a particular function. A service is scaled using X-axis scaling and, possibly, Z-axis scaling.

Figure 1.7. Some of the services of the microservice architecture-based version of the FTGO application. An API Gateway routes requests from the mobile applications to services. The services collaborate via APIs.

Figure 1.8. The microservices-based FTGO application consists of a set of loosely coupled services. Each team develops, tests, and deploys their services independently.

Figure 1.9. The visual representation of different types of relationships between the patterns: a successor pattern solves a problem created by applying the predecessor pattern; two or more patterns can be alternative solutions to the same problem; one pattern can be a specialization of another pattern; and patterns that solve problems in the same area can be grouped, or generalized.

Figure 1.10. A high-level view of the Microservice architecture pattern language showing the different problem areas that the patterns solve. On the left are the application architecture patterns: Monolithic architecture and Microservice architecture. All the other groups of patterns solve problems that result from choosing the Microservice architecture pattern.

Figure 1.11. There are two decomposition patterns: Decompose by business capability, which organizes services around business capabilities, and Decompose by subdomain, which organizes services around domain-driven design (DDD) subdomains.

Figure 1.12. The five groups of communication patterns

Figure 1.13. Because each service has its own database, you must use the Saga pattern to maintain data consistency across services.

Figure 1.14. Because each service has its own database, you must use one of the querying patterns to retrieve data scattered across multiple services.

Figure 1.15. Several patterns for deploying microservices. The traditional approach is to deploy services in a language-specific packaging format. There are two modern approaches to deploying services. The first deploys services as VMs or containers. The second is the serverless approach. You simply upload the service’s code and the serverless platform runs it. You should use a service deployment platform, which is an automated, self-service platform for deploying and managing services.

Figure 1.16. The rapid, frequent, and reliable delivery of large, complex applications requires a combination of DevOps, which includes continuous delivery/deployment, small, autonomous teams, and the microservice architecture.

Chapter 2. Decomposition strategies

Figure 2.1. The 4+1 view model describes an application’s architecture using four views, along with scenarios that show how the elements within each view collaborate to handle requests.

Figure 2.2. An example of a hexagonal architecture, which consists of the business logic and one or more adapters that communicate with external systems. The business logic has one or more ports. Inbound adapters, which handle requests from external systems, invoke an inbound port. An outbound adapter implements an outbound port, and invokes an external system.

Figure 2.3. A possible microservice architecture for the FTGO application. It consists of numerous services.

Figure 2.4. A service has an API that encapsulates the implementation. The API defines operations, which are invoked by clients. There are two types of operations: commands update data, and queries retrieve data. When its data changes, a service publishes events that clients can subscribe to.

Figure 2.5. A three-step process for defining an application’s microservice architecture

Figure 2.6. System operations are derived from the application’s requirements using a two-step process. The first step is to create a high-level domain model. The second step is to define the system operations, which are defined in terms of the domain model.

Figure 2.7. The key classes in the FTGO domain model

Figure 2.8. Mapping FTGO business capabilities to services. Capabilities at various levels of the capability hierarchy are mapped to services.

Figure 2.9. From subdomains to services: each subdomain of the FTGO application domain is mapped to a service, which has its own domain model.

Figure 2.10. The Order god class is bloated with numerous responsibilities.

Figure 2.11. The Delivery Service domain model

Figure 2.12. The Kitchen Service domain model

Figure 2.13. The Order Service domain model

Chapter 3. Interprocess communication in a microservice architecture

Figure 3.1. The client’s business logic invokes an interface that is implemented by an RPI proxy adapter class. The RPI proxy class makes a request to the service. The RPI server adapter class handles the request by invoking the service’s business logic.

Figure 3.2. An API gateway must protect itself from unresponsive services, such as the Order Service.

Figure 3.3. The API gateway implements the GET /orders/{orderId} endpoint using API composition. It calls several services, aggregates their responses, and sends a response to the mobile app. The code that implements the endpoint must have a strategy for handling the failure of each service that it calls.

Figure 3.4. Service instances have dynamically assigned IP addresses.

Figure 3.5. The service registry keeps track of the service instances. Clients query the service registry to find network locations of available service instances.

Figure 3.6. The platform is responsible for service registration, discovery, and request routing. Service instances are registered with the service registry by the registrar. Each service has a network location, a DNS name/virtual IP address. A client makes a request to the service’s network location. The router queries the service registry and load balances requests across the available service instances.

Figure 3.7. The business logic in the sender invokes a sending port interface, which is implemented by a message sender adapter. The message sender sends a message to a receiver via a message channel. The message channel is an abstraction of messaging infrastructure. A message handler adapter in the receiver is invoked to handle the message. It invokes the receiving port interface implemented by the receiver’s business logic.

Figure 3.8. Implementing asynchronous request/response by including a reply channel and message identifier in the request message. The receiver processes the message and sends the reply to the specified reply channel.

Figure 3.9. A service’s asynchronous API consists of message channels and command, reply, and event message types.

Figure 3.10. The services in brokerless architecture communicate directly, whereas the services in a broker-based architecture communicate via a message broker.

Figure 3.11. Scaling consumers while preserving message ordering by using a sharded (partitioned) message channel. The sender includes the shard key in the message. The message broker writes the message to a shard determined by the shard key. The message broker assigns each partition to an instance of the replicated receiver.

Figure 3.12. A consumer detects and discards duplicate messages by recording the IDs of processed messages in a database table. If a message has been processed before, the INSERT into the PROCESSED_MESSAGES table will fail.

Figure 3.13. A service reliably publishes a message by inserting it into an OUTBOX table as part of the transaction that updates the database. The Message Relay reads the OUTBOX table and publishes the messages to a message broker.

Figure 3.14. A service publishes messages inserted into the OUTBOX table by mining the database’s transaction log.

Figure 3.15. The Order Service invokes other services using REST. It’s straightforward, but it requires all the services to be simultaneously available, which reduces the availability of the API.

Figure 3.16. The FTGO application has higher availability if its services communicate using asynchronous messaging instead of synchronous calls.

Figure 3.17. Order Service is self-contained because it has replicas of the consumer and restaurant data.

Figure 3.18. Order Service creates an order without invoking any other service. It then asynchronously validates the newly created Order by exchanging messages with other services, including Consumer Service and Restaurant Service.

Chapter 4. Managing transactions with sagas

Figure 4.1. The createOrder() operation updates data in several services. It must use a mechanism to maintain data consistency across those services.

Figure 4.2. Creating an Order using a saga. The createOrder() operation is implemented by a saga that consists of local transactions in several services.

Figure 4.3. When a step of a saga fails because of a business rule violation, the saga must explicitly undo the updates made by previous steps by executing compensating transactions.

Figure 4.4. Implementing the Create Order Saga using choreography. The saga participants communicate by exchanging events.

Figure 4.5. The sequence of events in the Create Order Saga when the authorization of the consumer’s credit card fails. Accounting Service publishes the Credit Card Authorization Failed event, which causes Kitchen Service to reject the Ticket, and Order Service to reject the Order.

Figure 4.6. Implementing the Create Order Saga using orchestration. Order Service implements a saga orchestrator, which invokes the saga participants using asynchronous request/response.

Figure 4.7. The state machine model for the Create Order Saga

Figure 4.8. A saga consists of three different types of transactions: compensatable transactions, which can be rolled back and so have a compensating transaction; a pivot transaction, which is the saga’s go/no-go point; and retriable transactions, which don’t need to be rolled back and are guaranteed to complete.

Figure 4.9. The design of the Order Service and its sagas

Figure 4.10. OrderService creates and updates Orders, invokes the OrderRepository to persist Orders, and creates sagas, including the CreateOrderSaga.

Figure 4.11. The OrderService’s sagas, such as Create Order Saga, are implemented using the Eventuate Tram Saga framework.

Figure 4.12. Eventuate Tram Saga is a framework for writing both saga orchestrators and saga participants.

Figure 4.13. The sequence of events when OrderService creates an instance of Create Order Saga

Figure 4.14. The sequence of events when the SagaManager receives a reply message from a saga participant

Figure 4.15. OrderCommandHandlers implements command handlers for the commands that are sent by the various Order Service sagas.

Chapter 5. Designing business logic in a microservice architecture

Figure 5.1. The Order Service has a hexagonal architecture. It consists of the business logic and one or more adapters that interface with external applications and other services.

Figure 5.2. Organizing business logic as transaction scripts. In a typical transaction script–based design, one set of classes implements behavior and another set stores state. The transaction scripts are organized into classes that typically have no state. The scripts use data classes, which typically have no behavior.

Figure 5.3. Organizing business logic as a domain model. The majority of the business logic consists of classes that have state and behavior.

Figure 5.4. A traditional domain model is a web of interconnected classes. It doesn’t explicitly specify the boundaries of business objects, such as Consumer and Order.

Figure 5.5. Structuring a domain model as a set of aggregates makes the boundaries explicit.

Figure 5.6. References between aggregates are by primary key rather than by object reference. The Order aggregate has the IDs of the Consumer and Restaurant aggregates. Within an aggregate, objects have references to one another.

Figure 5.7. A transaction can only create or update a single aggregate, so an application uses a saga to update multiple aggregates. Each step of the saga creates or updates one aggregate.

Figure 5.8. An alternative design defines a Customer aggregate that contains the Customer and Order classes. This design enables an application to atomically update a Consumer and one or more of its Orders.

Figure 5.9. An aggregate-based design for the Order Service business logic

Figure 5.10. The result of an event-storming workshop that lasted a couple of hours. The sticky notes are events, which are laid out along a timeline; commands, which represent user actions; and aggregates, which emit events in response to a command.

Figure 5.11. The design of Kitchen Service

Figure 5.12. The design of the Order Service. It has a REST API for managing orders. It exchanges messages and events with other services via several message channels.

Figure 5.13. The design of the Order aggregate, which consists of the Order aggregate root and various value objects.

Figure 5.14. Part of the state machine model of the Order aggregate

Chapter 6. Developing business logic with event sourcing

Figure 6.1. The traditional approach to persistence maps classes to tables and objects to rows in those tables.

Figure 6.2. Event sourcing persists each aggregate as a sequence of events. An RDBMS-based application can, for example, store the events in an EVENTS table.

Figure 6.3. Applying event E when the Order is in state S must change the Order state to S'. The event must contain the data necessary to perform the state change.

Figure 6.4. Processing a command generates events without changing the state of the aggregate. An aggregate is updated by applying an event.

Figure 6.5. Event sourcing splits a method that updates an aggregate into a process() method, which takes a command and returns events, and one or more apply() methods, which take an event and update the aggregate.
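
The split that figure 6.5 illustrates can be sketched in Java. This is a minimal, hypothetical example — the Order field, ReviseOrderCommand, and OrderRevisedEvent here are simplified stand-ins, not the book's actual classes:

```java
import java.util.List;

// Hypothetical, simplified types; the Eventuate framework's interfaces are richer.
record ReviseOrderCommand(int newQuantity) {}
record OrderRevisedEvent(int newQuantity) {}

class Order {
    private int quantity = 1;

    // process() validates the command and returns events; it does NOT
    // change the aggregate's state.
    List<OrderRevisedEvent> process(ReviseOrderCommand cmd) {
        if (cmd.newQuantity() <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        return List.of(new OrderRevisedEvent(cmd.newQuantity()));
    }

    // apply() performs the state change for a single event.
    void apply(OrderRevisedEvent event) {
        this.quantity = event.newQuantity();
    }

    int getQuantity() { return quantity; }
}
```

Because process() is side-effect free, the same apply() methods can replay stored events to reconstruct the aggregate's current state.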

Figure 6.6. A scenario where an event is skipped because its transaction A commits after transaction B. Polling sees eventId=1020 and then later skips eventId=1010.

Figure 6.7. Using a snapshot improves performance by eliminating the need to load all events. An application only needs to load the snapshot and the events that occur after it.

Figure 6.8. The Customer Service recreates the Customer by deserializing the snapshot’s JSON and then loading and applying events #104 through #106.

Figure 6.9. The architecture of Eventuate Local. It consists of an event database (such as MySQL) that stores the events, an event broker (like Apache Kafka) that delivers events to subscribers, and an event relay that publishes events stored in the event database to the event broker.

Figure 6.10. The main classes and interfaces provided by the Eventuate client framework for Java

Figure 6.11. Using an event handler to reliably create a saga after a service creates an event sourcing-based aggregate

Figure 6.12. How the event sourcing-based Accounting Service participates in Create Order Saga

Figure 6.13. How an event sourcing-based saga orchestrator sends commands to saga participants

Chapter 7. Implementing queries in a microservice architecture

Figure 7.1. The findOrder() operation is invoked by a FTGO frontend module and returns the details of an Order.

Figure 7.2. The API composition pattern consists of an API composer and two or more provider services. The API composer implements a query by querying the providers and combining the results.
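
The composition that figure 7.2 depicts can be sketched as follows. The class and method names are illustrative assumptions, and the provider calls are stubbed with canned data; in a real implementation each would issue an HTTP request to a provider service:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// A minimal sketch of an API composer with two stubbed provider calls.
class FindOrderComposer {

    CompletableFuture<Map<String, Object>> getOrderInfo(long orderId) {
        return CompletableFuture.supplyAsync(() -> Map.of("state", "APPROVED"));
    }

    CompletableFuture<Map<String, Object>> getDeliveryInfo(long orderId) {
        return CompletableFuture.supplyAsync(() -> Map.of("eta", "18:30"));
    }

    // Query the providers concurrently, then combine the results into a
    // single response.
    Map<String, Object> findOrder(long orderId) {
        CompletableFuture<Map<String, Object>> order = getOrderInfo(orderId);
        CompletableFuture<Map<String, Object>> delivery = getDeliveryInfo(orderId);
        Map<String, Object> combined = new HashMap<>(order.join());
        combined.putAll(delivery.join());
        return combined;
    }
}
```

Issuing the provider queries concurrently rather than sequentially keeps the composed operation's latency close to that of the slowest provider.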

Figure 7.3. Implementing findOrder() using the API composition pattern

Figure 7.4. Implementing API composition in a client. The client queries the provider services to retrieve the data.

Figure 7.5. Implementing API composition in the API gateway. The API queries the provider services to retrieve the data, combines the results, and returns a response to the client.

Figure 7.6. Implementing a query operation used by multiple clients and services as a standalone service.

Figure 7.7. API composition can’t efficiently retrieve a consumer’s orders, because some providers, such as Delivery Service, don’t store the attributes used for filtering.

Figure 7.8. On the left is the non-CQRS version of the service, and on the right is the CQRS version. CQRS restructures a service into command-side and query-side modules, which have separate databases.

Figure 7.9. The design of Order History Service, which is a query-side service. It implements the findOrderHistory() query operation by querying a database, which it maintains by subscribing to events published by multiple other services.

Figure 7.10. The design of a CQRS view module. Event handlers update the view database, which is queried by the Query API module.

Figure 7.11. The DeliveryPickedUp and DeliveryDelivered events are delivered twice, which causes the order state in the view to be temporarily out of date.

Figure 7.12. The design of OrderHistoryService. OrderHistoryEventHandlers updates the database in response to events. The OrderHistoryQuery module implements the query operations by querying the database. These two modules use the OrderHistoryDataAccess module to access the database.

Figure 7.13. Preliminary structure of the DynamoDB OrderHistory table

Figure 7.14. The design of the OrderHistory table and index

Chapter 8. External API patterns

Figure 8.1. The FTGO application’s services and their clients. There are several different types of clients. Some are inside the firewall, and others are outside. Those outside the firewall access the services over the lower-performance internet/mobile network. Those clients inside the firewall use a higher-performance LAN.

Figure 8.2. A client can retrieve the order details from the monolithic FTGO application with a single request. But the client must make multiple requests to retrieve the same information in a microservice architecture.

Figure 8.3. The API gateway is the single entry point into the application for API calls from outside the firewall.

Figure 8.4. An API gateway often does API composition, which enables a client such as a mobile device to efficiently retrieve data using a single API request.

Figure 8.5. An API gateway has a layered modular architecture. The API for each client is implemented by a separate module. The common layer implements functionality common to all APIs, such as authentication.

Figure 8.6. A client team owns their API module. As they change the client, they can change the API module and not ask the API gateway team to make the changes.

Figure 8.7. The Backends for frontends pattern defines a separate API gateway for each client. Each client team owns their API gateway. An API gateway team owns the common layer.

Figure 8.8. The architecture of an API gateway built using Spring Cloud Gateway

Figure 8.9. The API gateway’s API consists of a graph-based schema that’s mapped to the services. A client issues a query that retrieves multiple graph nodes. The graph-based API framework executes the query by retrieving data from one or more services.

Figure 8.10. The design of the GraphQL-based FTGO API Gateway

Figure 8.11. GraphQL executes a query by recursively invoking the resolver functions for the fields specified in the Query document. First, it executes the resolver for the query, and then it recursively invokes the resolvers for the fields in the result object hierarchy.

Chapter 9. Testing microservices: Part 1

Figure 9.1. The goal of a test is to verify the behavior of the system under test. An SUT might be as small as a class or as large as an entire application.

Figure 9.2. Each automated test is implemented by a test method, which belongs to a test class. A test consists of four phases: setup, which initializes the test fixture, which is everything required to run the test; execute, which invokes the SUT; verify, which verifies the outcome of the test; and teardown, which cleans up the test fixture.
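
The four phases that figure 9.2 describes can be sketched without any test framework so the structure is explicit; with JUnit, setup and teardown would usually live in @BeforeEach and @AfterEach methods. Counter is a trivial, hypothetical stand-in for a real SUT:

```java
// A framework-free sketch of the four test phases.
class FourPhaseTest {

    static class Counter {           // the system under test
        private int value;
        void increment() { value++; }
        int value() { return value; }
    }

    private Counter fixture;

    void setUp()     { fixture = new Counter(); }      // 1. setup: build the fixture
    void execute()   { fixture.increment(); }          // 2. execute: invoke the SUT
    boolean verify() { return fixture.value() == 1; }  // 3. verify the outcome
    void tearDown()  { fixture = null; }               // 4. teardown: clean up

    boolean run() {
        setUp();
        execute();
        boolean ok = verify();
        tearDown();
        return ok;
    }
}
```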

Figure 9.3. Replacing a dependency with a test double enables the SUT to be tested in isolation. The test is simpler and faster.

Figure 9.4. The test quadrant categorizes tests along two dimensions. The first dimension is whether a test is business facing or technology facing. The second is whether the purpose of the test is to support programming or critique the application.

Figure 9.5. The test pyramid describes the relative proportions of each type of test that you need to write. As you move up the pyramid, you should write fewer and fewer tests.

Figure 9.6. Some of the interservice communication in the FTGO application. Each arrow points from a consumer service to a producer service.

Figure 9.7. Each team that develops a service that consumes Order Service’s API contributes a contract test suite. The test suite verifies that the API matches the consumer’s expectations. This test suite, along with those contributed by other teams, is run by Order Service’s deployment pipeline.

Figure 9.8. The API Gateway team writes the contracts. The Order Service team uses those contracts to test Order Service and publishes them to a repository. The API Gateway team uses the published contracts to test API Gateway.

Figure 9.9. An example deployment pipeline for Order Service. It consists of a series of stages. The pre-commit tests are run by the developer prior to committing their code. The remaining stages are executed by an automated tool, such as the Jenkins CI server.

Figure 9.10. Unit tests are the base of the pyramid. They’re fast running, easy to write, and reliable. A solitary unit test tests a class in isolation, using mocks or stubs for its dependencies. A sociable unit test tests a class and its dependencies.

Figure 9.11. The responsibilities of a class determine whether to use a solitary or sociable unit test.

Chapter 10. Testing microservices: Part 2

Figure 10.1. Integration tests must verify that a service can communicate with its clients and dependencies. But rather than testing whole services, the strategy is to test the individual adapter classes that implement the communication.

Figure 10.2. Integration tests are the layer above unit tests. They verify that a service can communicate with its dependencies, which includes infrastructure services, such as the database, and application services.

Figure 10.3. The contracts are used to verify that the adapter classes on both sides of the REST-based communication between API Gateway and Order Service conform to the contract. The consumer-side tests verify that OrderServiceProxy invokes Order Service correctly. The provider-side tests verify that OrderController implements the REST API endpoints correctly.

Figure 10.4. The contracts are used to test both sides of the publish/subscribe interaction. The provider-side tests verify that OrderDomainEventPublisher publishes events that conform to the contract. The consumer-side tests verify that OrderHistoryEventHandlers consume the example events from the contract.

Figure 10.5. The contracts are used to test the adapter classes that implement each side of the asynchronous request/response interaction. The provider-side tests verify that KitchenServiceCommandHandler handles commands and sends back replies. The consumer-side tests verify that KitchenServiceProxy sends commands that conform to the contract, and that it handles the example replies from the contract.

Figure 10.6. A component test tests a service in isolation. It typically uses stubs for the service’s dependencies.

Figure 10.7. The component tests for Order Service use the Cucumber testing framework to execute test scenarios written using the Gherkin acceptance-testing DSL. The tests use Docker to run Order Service along with its infrastructure services, such as Apache Kafka and MySQL.

Figure 10.8. End-to-end tests are at the top of the test pyramid. They are slow, brittle, and time consuming to develop. You should minimize the number of end-to-end tests.

Chapter 11. Developing production-ready services

Figure 11.1. A client of the FTGO application first logs in to obtain a session token, which is often a cookie. The client includes the session token in each subsequent request it makes to the application.

Figure 11.2. When a client of the FTGO application makes a login request, Login Handler authenticates the user, initializes the session user information, and returns a session token cookie, which securely identifies the session. Next, when the client makes a request containing the session token, SessionBasedSecurityInterceptor retrieves the user information from the specified session and establishes the security context. Request handlers, such as OrderDetailsRequestHandler, retrieve the user information from the security context.

Figure 11.3. The API gateway authenticates requests from clients and includes a security token in the requests it makes to services. The services use the token to obtain information about the principal. The API gateway can also use the security token as a session token.

Figure 11.4. An API gateway authenticates an API client by making a Password Grant request to the OAuth 2.0 authentication server. The server returns an access token, which the API gateway passes to the services. A service verifies the token’s signature and extracts information about the user, including their identity and roles.

Figure 11.5. A client logs in by POSTing its credentials to the API gateway. The API gateway authenticates the credentials using the OAuth 2.0 authentication server and returns the access token and refresh token as cookies. A client includes these tokens in the requests it makes to the API gateway.

Figure 11.6. Order History Service uses Apache Kafka and AWS DynamoDB. It needs to be configured with each service’s network location, credentials, and so on.

Figure 11.7. When the deployment infrastructure creates an instance of Order History Service, it sets the environment variables containing the externalized configuration. Order History Service reads those environment variables.

Figure 11.8. On startup, a service instance retrieves its configuration properties from a configuration server. The deployment infrastructure provides the configuration properties for accessing the configuration server.

Figure 11.9. The observability patterns enable developers and operations to understand the behavior of an application and troubleshoot problems. Developers are responsible for ensuring that their services are observable. Operations are responsible for the infrastructure that collects the information exposed by the services.

Figure 11.10. A service implements a health check endpoint, which is periodically invoked by the deployment infrastructure to determine the health of the service instance.

Figure 11.11. The log aggregation infrastructure ships the logs of each service instance to a centralized logging server. Users can view and search the logs. They can also set up alerts, which are triggered when log entries match search criteria.

Figure 11.12. The Zipkin server shows how the FTGO application handles a request that’s routed by the API gateway to Order Service. Each request is represented by a trace. A trace is a set of spans. Each span, which can contain child spans, is the invocation of a service. Depending on the level of detail collected, a span can also represent the invocation of an operation inside a service.

Figure 11.13. Each service (including the API gateway) uses an instrumentation library. The instrumentation library assigns an ID to each external request, propagates tracing state between services, and reports spans to the distributed tracing server.

Figure 11.14. Metrics at every level of the stack are collected and stored in a metrics service, which provides visualization and alerting.

Figure 11.15. A service reports exceptions to an exception tracking service, which de-duplicates exceptions and alerts developers. It has a UI for viewing and managing exceptions.

Figure 11.16. A microservice chassis is a framework that handles numerous concerns, such as exception tracking, logging, health checks, externalized configuration, and distributed tracing.

Figure 11.17. All network traffic in and out of a service flows through the service mesh. The service mesh implements various functions including circuit breakers, distributed tracing, service discovery, and load balancing. Fewer functions are implemented by the microservice chassis. It also secures interprocess communication by using TLS-based IPC between services.

Chapter 12. Deploying microservices

Figure 12.1. Heavyweight and long-lived physical machines have been abstracted away by increasingly lightweight and ephemeral technologies.

Figure 12.2. A simplified view of the production environment. It provides four main capabilities: service management enables developers to deploy and manage their services, runtime management ensures that the services are running, monitoring visualizes service behavior and generates alerts, and request routing routes requests from users to the services.

Figure 12.3. The deployment pipeline builds an executable JAR file and deploys it into production. In production, each service instance is a JVM running on a machine that has the JDK or JRE installed.

Figure 12.4. Deploying multiple service instances on the same machine. They might be instances of the same service or instances of different services. The overhead of the OS is shared among the service instances. Each service instance is a separate process, so there’s some isolation between them.

Figure 12.5. Deploying multiple services instances on the same web container or application server. They might be instances of the same service or instances of different services. The overhead of the OS and runtime is shared among all the service instances. But because the service instances are in the same process, there’s no isolation between them.

Figure 12.6. The deployment pipeline packages a service as a virtual machine image, such as an EC2 AMI, containing everything required to run the service, including the language runtime. At runtime, each service instance is a VM, such as an EC2 instance, instantiated from that image. An EC2 Elastic Load Balancer routes requests to the instances.

Figure 12.7. A container consists of one or more processes running in an isolated sandbox. Multiple containers usually run on a single machine. The containers share the operating system.

Figure 12.8. A service is packaged as a container image, which is stored in a registry. At runtime the service consists of multiple containers instantiated from that image. Containers typically run on virtual machines. A single VM will usually run multiple containers.

Figure 12.9. A Docker orchestration framework turns a set of machines running Docker into a cluster of resources. It assigns containers to machines. The framework attempts to keep the desired number of healthy containers running at all times.

Figure 12.10. A Kubernetes cluster consists of a master, which manages the cluster, and nodes, which run the services. Developers and the deployment pipeline interact with Kubernetes through the API server, which along with other cluster-management software runs on the master. Application containers run on nodes. Each node runs a Kubelet, which manages the application container, and a kube-proxy, which routes application requests to the pods, either directly as a proxy or indirectly by configuring iptables routing rules built into the Linux kernel.

Figure 12.11. Istio consists of a control plane, whose components include the Pilot and the Mixer, and a data plane, which consists of Envoy proxy servers. The Pilot extracts information about deployed services from the underlying infrastructure and configures the data plane. The Mixer enforces policies such as quotas and gathers telemetry, reporting it to the monitoring infrastructure servers. The Envoy proxy servers route traffic in and out of services. There’s one Envoy proxy server per service instance.

Figure 12.12. The routing rule for Consumer Service, which routes all traffic to the v1 pods. It consists of a VirtualService, which routes its traffic to the v1 subset, and a DestinationRule, which defines the v1 subset as the pods labeled with version: v1. Once you’ve defined this rule, you can safely deploy a new version without routing any traffic to it initially.

Figure 12.13. Deploying Restaurant Service as AWS Lambda functions. The AWS API Gateway routes HTTP requests to the AWS Lambda functions, which are implemented by request handler classes defined by Restaurant Service.

Figure 12.14. The design of the AWS Lambda-based Restaurant Service. The presentation layer consists of request handler classes, which implement the lambda functions. They invoke the business tier, which is written in a traditional style consisting of a service class, an entity, and a repository.

Figure 12.15. The design of the request handler classes. The abstract superclasses implement dependency injection and error handling.

Chapter 13. Refactoring to microservices

Figure 13.1. The monolith is incrementally replaced by a strangler application composed of services. Eventually, the monolith is replaced entirely by the strangler application or becomes another microservice.

Figure 13.2. A new feature is implemented as a service that’s part of the strangler application. The integration glue integrates the service with the monolith and consists of adapters that implement synchronous and asynchronous APIs. An API gateway routes requests that invoke new functionality to the service.

Figure 13.3. Splitting the frontend from the backend enables each to be deployed independently. It also exposes an API for services to invoke.

Figure 13.4. Break apart the monolith by extracting services. You identify a slice of functionality, which consists of business logic and adapters, to extract into a service. You move that code into the service. The newly extracted service and the monolith collaborate via the APIs provided by the integration glue.

Figure 13.5. The Order domain class has a reference to a Restaurant class. If we extract Order into a separate service, we need to do something about its reference to Restaurant, because object references between processes don’t make sense.

Figure 13.6. The Order class’s reference to Restaurant is replaced with the Restaurant’s primary key in order to eliminate an object that would span process boundaries.
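
The refactoring shown in figure 13.6 can be sketched as a before/after pair. The classes here are simplified, hypothetical versions of the FTGO domain model — before extraction, Order holds an in-memory object reference, which is meaningless once Order and Restaurant live in different services; after extraction, the reference becomes the Restaurant's primary key, which remains valid across process boundaries:

```java
// Hypothetical, simplified classes; the real FTGO domain model is richer.
class Restaurant {
    private final long id;
    Restaurant(long id) { this.id = id; }
    long getId() { return id; }
}

class OrderBefore {                   // same process: object reference works
    private final Restaurant restaurant;
    OrderBefore(Restaurant restaurant) { this.restaurant = restaurant; }
}

class OrderAfter {                    // separate services: store the key
    private final long restaurantId;
    OrderAfter(long restaurantId) { this.restaurantId = restaurantId; }
    long getRestaurantId() { return restaurantId; }
}
```

When the extracted service needs Restaurant data, it uses the key to call Restaurant Service's API or to look up a local replica.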

Figure 13.7. Minimize the scope of the changes to the FTGO monolith by replicating delivery-related data from the newly extracted Delivery Service back to the monolith’s database.

Figure 13.8. When migrating a monolith to microservices, the services and monolith often need to access each other’s data. This interaction is facilitated by the integration glue, which consists of adapters that implement APIs. Some APIs are messaging based. Other APIs are RPI based.

Figure 13.9. The adapter that implements the CustomerContactInfoRepository interface invokes the monolith’s REST API to retrieve the customer information.

Figure 13.10. The integration glue replicates data from the monolith to the service. The monolith publishes domain events, and an event handler implemented by the service updates the service’s database.

Figure 13.11. A service adapter that invokes the monolith must translate between the service’s domain model and the monolith’s domain model.

Figure 13.12. An event handler must translate from the event publisher’s domain model to the subscriber’s domain model.

Figure 13.13. The login handler is enhanced to set a USERINFO cookie, which is a JWT containing user information. API Gateway transfers the USERINFO cookie to an authorization header when it invokes a service.

Figure 13.14. The design of Delayed Delivery Service. The integration glue provides Delayed Delivery Service access to data owned by the monolith, such as the Order and Restaurant entities, and the customer contact information.

Figure 13.15. The integration glue provides Delayed Delivery Service with access to the data owned by the monolith.

Figure 13.16. Delivery management is entangled with order management within the FTGO monolith.

Figure 13.17. The high-level view of the FTGO application after extracting Delivery Service. The FTGO monolith and Delivery Service collaborate using the integration glue, which consists of APIs in each of them. The two key decisions are which functionality and data to move to Delivery Service, and how the monolith and Delivery Service collaborate via those APIs.

Figure 13.18. The entities and fields that are accessed by delivery management and other functionality implemented by the monolith. A field can be read or written or both. It can be accessed by delivery management, the monolith, or both.

Figure 13.19. The design of the Delivery Service’s domain model

Figure 13.20. The design of the Delivery Service integration glue. Delivery Service has a delivery management API. The service and the FTGO monolith synchronize data by exchanging domain events.

Figure 13.21. The first step is to define DeliveryService, which is a coarse-grained, remotable API for invoking the delivery management logic.

Figure 13.22. The second step is to change the FTGO monolith to invoke delivery management via the DeliveryService interface.

Figure 13.23. The final step is to implement DeliveryService with a proxy class that sends messages to Delivery Service. A feature toggle controls whether the FTGO monolith uses the old implementation or the new Delivery Service.

List of Tables

Chapter 1. Escaping monolithic hell

Table 1.1. Comparing SOA with microservices

Chapter 2. Decomposition strategies

Table 2.1. Key system commands for the FTGO application

Table 2.2. Mapping system operations to services in the FTGO application

Table 2.3. The services, their revised APIs, and their collaborators

Chapter 3. Interprocess communication in a microservice architecture

Table 3.1. The various interaction styles can be characterized in two dimensions: one-to-one vs one-to-many and synchronous vs asynchronous.

Table 3.2. Each message broker implements the message channel concept in a different way.

Chapter 4. Managing transactions with sagas

Table 4.1. The compensating transactions for the Create Order Saga

Chapter 6. Developing business logic with event sourcing

Table 6.1. The different ways that an application’s events can evolve

Chapter 7. Implementing queries in a microservice architecture

Table 7.1. Query-side view stores

Chapter 10. Testing microservices: Part 2

Table 10.1. The structure of a contract depends on the type of interaction between the services.

List of Listings

Chapter 3. Interprocess communication in a microservice architecture

Listing 3.1. An excerpt of the gRPC API for the Order Service

Chapter 4. Managing transactions with sagas

Listing 4.1. The OrderService class and its createOrder() method

Listing 4.2. The definition of the CreateOrderSaga

Listing 4.3. The definition of the third step of the saga

Listing 4.4. CreateOrderSagaState stores the state of a saga instance

Listing 4.5. KitchenServiceProxy defines the command message endpoints for Kitchen Service

Listing 4.6. The command handlers for Order Service

Listing 4.7. The OrderServiceConfiguration is a Spring @Configuration class that defines the Spring @Beans for the Order Service.

Chapter 5. Designing business logic in a microservice architecture

Listing 5.1. The OrderCreated event and the DomainEventEnvelope class

Listing 5.2. The enriched OrderCreated event

Listing 5.3. The Ticket aggregate’s accept() method

Listing 5.4. KitchenService calls Ticket.accept()

Listing 5.5. The Ticket extends a superclass, which records domain events

Listing 5.6. The Eventuate Tram framework’s DomainEventPublisher interface

Listing 5.7. The abstract superclass of type-safe domain event publishers

Listing 5.8. A type-safe interface for publishing Ticket aggregates’ domain events

Listing 5.9. Dispatching events to event handler methods

Listing 5.10. Part of the Ticket class, which is a JPA entity

Listing 5.11. Some of the Ticket’s methods

Listing 5.12. The service’s accept() method updates Ticket

Listing 5.13. Handling command messages sent by sagas

Listing 5.14. The Order class and its fields

Listing 5.15. The methods that are invoked during order creation

Listing 5.16. The Order method for revising an Order

Listing 5.17. The OrderService class has methods for creating and managing orders

Chapter 6. Developing business logic with event sourcing

Listing 6.1. The Order aggregate’s fields and its methods that initialize an instance

Listing 6.2. The process() and apply() methods that revise an Order aggregate

Listing 6.3. The Eventuate version of the Order class

Listing 6.4. OrderService uses an AggregateRepository

Listing 6.5. An event handler for OrderCreatedEvent

Listing 6.6. Handles command messages sent by sagas

Chapter 7. Implementing queries in a microservice architecture

Listing 7.1. Event handlers that call the OrderHistoryDao

Listing 7.2. The addOrder() method adds or updates an Order

Listing 7.3. The notePickedUp() method changes the order status to PICKED_UP

Listing 7.4. The idempotentUpdate() method ignores duplicate events

Listing 7.5. The findOrderHistory() method retrieves a consumer’s matching orders

Chapter 8. External API patterns

Listing 8.1. Fetching the order details by calling the backend services sequentially

Listing 8.2. The Spring @Beans that implement the /orders endpoints

Listing 8.3. The externalized configuration of backend service URLs

Listing 8.4. The OrderHandlers class implements custom request-handling logic.

Listing 8.5. OrderService class—a remote proxy for Order Service

Listing 8.6. The main() method for the API gateway

Listing 8.7. The GraphQL schema for the FTGO API gateway

Listing 8.8. Attaching the resolver functions to fields of the GraphQL schema

Listing 8.9. Using a DataLoader to optimize calls to Restaurant Service

Listing 8.10. Integrating the GraphQL server with the Express web framework

Listing 8.11. Using the Apollo GraphQL client to execute queries

Chapter 9. Testing microservices: Part 1

Listing 9.1. A contract that describes how API Gateway invokes Order Service

Listing 9.2. A simple, fast-running unit test for the Order entity

Listing 9.3. A simple, fast-running test for the Money value object

Listing 9.4. A simple, fast-running unit test for CreateOrderSaga

Listing 9.5. A simple, fast-running unit test for the OrderService class

Listing 9.6. A simple, fast-running unit test for the OrderController class

Listing 9.7. A fast-running unit test for the OrderEventConsumer class

Chapter 10. Testing microservices: Part 2

Listing 10.1. An integration test that verifies that an Order can be persisted

Listing 10.2. A contract that describes an HTTP-based request/response style interaction

Listing 10.3. The abstract base class for the tests code-generated by Spring Cloud Contract

Listing 10.4. A consumer-side integration test for API Gateway’s OrderServiceProxy

Listing 10.5. A contract for a publish/subscribe interaction style

Listing 10.6. The abstract base class for the Spring Cloud Contract provider-side tests

Listing 10.7. The consumer-side integration test for the OrderHistoryEventHandlers class

Listing 10.8. Contract describing how Order Service asynchronously invokes Kitchen Service

Listing 10.9. The consumer-side contract integration test for Order Service

Listing 10.10. Superclass of provider-side, consumer-driven contract tests for Kitchen Service

Listing 10.11. The Gherkin definition of the Place Order feature and some of its scenarios

Listing 10.12. The Java step definitions class makes the Gherkin scenarios executable.

Listing 10.13. The @GivenuseCreditCard() method defines the meaning of the Given using ... credit card step.

Listing 10.14. The placeOrder() method defines the When I place an order for Chicken Vindaloo at Ajanta step.

Listing 10.15. The @ThentheOrderShouldBe() method verifies that the HTTP request was successful.

Listing 10.16. The Cucumber step definitions class for the Order Service component tests

Listing 10.17. A Gherkin-based specification of a user journey

Chapter 11. Developing production-ready services

Listing 11.1. OrderService tracks the number of orders placed, approved, and rejected.

Chapter 12. Deploying microservices

Listing 12.1. The Dockerfile used to build Restaurant Service

Listing 12.2. The shell commands used to build the container image for Restaurant Service

Listing 12.3. Using docker run to run a containerized service

Listing 12.4. Kubernetes Deployment for ftgo-restaurant-service

Listing 12.5. The YAML definition of the Kubernetes service for ftgo-restaurant-service

Listing 12.6. The YAML definition of a NodePort service that routes traffic to port 8082 of Consumer Service

Listing 12.7. Deploying Consumer Service with Istio

Listing 12.8. A Java lambda function is a class that implements the RequestHandler interface.

Listing 12.9. The handler class for GET /restaurant/{restaurantId}

Listing 12.10. An abstract RequestHandler that implements dependency injection

Listing 12.11. An abstract RequestHandler that catches exceptions and returns a 500 HTTP response

Listing 12.12. The serverless.yml deploys Restaurant Service.